How to fight Biometric Mass Surveillance after the AI Act: A legal and practical guide

The EU’s Artificial Intelligence Act has been adopted, laying out an in-principle ban on live mass facial recognition and other public biometric surveillance by police. Yet the wide exceptions to this ban may pave the way to legitimise the use of these systems. This living guide, for civil society organisations, communities and activists, charts a human rights-based approach for how to keep resisting biometric mass surveillance practices now and in the future.

By EDRi · May 27, 2024

Note: this is a dynamic and evolving piece of work which we expect to change over time as our assessment of the final AI Act matures, and as new opportunities (and challenges) arise. Any changes made after the initial publication of this guide will be clearly marked.

Contents

  • Introduction
  • Executive summary
  • Part 1: The AI Act framework
  • Part 2: Live remote biometric identification (RBI)
  • Part 3: Post (retrospective) remote biometric identification (RBI) – coming July/August 2024

Additional sections on high-risk biometrics; biometric categorisation; emotion recognition; biometric systems in border and migration contexts; biometrics definitions; and how to make use of the global precedent for a ban will also be added in due course.

Introduction

The future of the EU’s fight against biometric mass surveillance


Throughout spring 2024, European Union (EU) lawmakers have been taking the final procedural steps to pass a largely disappointing new law, the EU Artificial Intelligence (AI) Act.

This law is expected to come into force in the summer, with one of the most hotly-contested parts of the law – the bans on unacceptably harmful uses of AI – slated to apply from the beginning of 2025 (six months and 20 days after the legal text is officially published).

The first draft of this Act, in 2021, proposed to ban some forms of public facial recognition, showing that lawmakers were already listening to the demands of our Reclaim Your Face campaign. Since then, the AI Act has continued to be a focus point for our fight to stop people being treated as walking barcodes in public spaces.

But after a gruelling three-year process, AI Act negotiations are coming to an underwhelming end, with numerous missed opportunities to protect people’s rights and freedoms (especially for people on the move) or to uphold civic space.

One of the biggest problems we see is that the bans on different forms of biometric mass surveillance, or BMS, are full of holes. BMS is the term we’ve used as an umbrella for different methods of using people’s biometric data to surveil them in an untargeted or arbitrarily-targeted way – which have no place in a democratic society.

At the same time, all is not lost. As we get into the nitty-gritty of the final text, and reflect on the years of hard work, we mourn the existence of the dark clouds – yet we also celebrate the silver linings and the opportunities they create to better protect people’s sensitive biometric data.

Whilst the AI Act is supposed to ban a lot of unacceptable biometric practices, we’ve argued since the beginning that it could instead become a blueprint for how to conduct BMS.

As we predicted, the final Act takes a potentially dystopian step towards legalising live public facial recognition – which so far has never been explicitly allowed in any EU country. The same goes for pseudoscientific AI mind-reading systems, which the AI Act shockingly allows states to use in policing and border contexts. Using machines to categorise people’s gender and other sensitive characteristics, based on how they look, is also allowed in several contexts.

We have long argued that these practices can never be compatible with our fundamental rights to dignity, privacy, data protection, free expression and non-discrimination. By allowing them in a range of contexts, the AI Act risks legitimising these horrifying practices.

 

Yet whilst the law falls far short of the full ban on biometric mass surveillance in public spaces that we called for, it nevertheless offers many points to continue our fight in the future. To give one example, we have the powerful opportunity to capitalise on the wide political will in support of our ongoing work against BMS to make sure that the AI Act’s loopholes don’t make it into national laws in EU member states – with strong indications of support already in Austria and Germany.

The below legal and practical guide on ‘How to Fight Biometric Mass Surveillance after the AI Act’ is intended to inform and equip those who are reflecting and re-fuelling for the next stage in the fight against BMS. This includes charting out more than a dozen specific advocacy opportunities – including formal and informal spaces to influence – and highlighting the parts of the legal text that create space for our advocacy efforts.

We also remind ourselves that whilst the biometrics bans have been dangerously watered down, the Act nevertheless accepts that we must ban AI systems that are not compatible with a democratic society. This idea has been a vital concept for those of us working to protect human rights in the age of AI, and we faced a lot of opposition on this point from industry and conservative lawmakers.

This legal and normative acceptance of the need for AI bans has the potential to set an important global precedent for putting the rights and freedoms of people and communities ahead of the private interests of the lucrative security and surveillance tech industry. The industry wants all technologies and practices to be on the table – but the AI Act shows that this is not the EU’s way.

Executive summary

This guide details numerous opportunities to fight biometric mass surveillance


  1. From early 2025, some uses of real-time remote biometric identification (RBI) by police in publicly-accessible spaces; emotion recognition in workplaces and education settings; certain types of biometric categorisation; and all scraping of the internet to get facial images for biometric databases will be banned in the EU;
  2. EU Member States can extend this ban at any time in order to fully ban the use of real-time (live) RBI in publicly-accessible spaces for the purposes of law enforcement. They could also choose to extend this to all actors (i.e. including private actors and state actors other than police);
  3. Post (retrospective) RBI for law enforcement purposes (whether in public or non-public spaces) is not banned, but has to follow additional procedural rules which will apply from around mid-2026. Member States can only use these systems if they have updated their national laws to meet the AI Act’s conditions, meaning that these practices are de facto banned until that point. Like with real-time use, Member States can also extend this to further restrict or fully ban the use of post (retrospective) remote biometric identification for the purposes of law enforcement. They could also choose to extend limits/bans to all actors (i.e. including private actors and state actors other than police);
  4. Even in the worst case scenario – where Member States instead authorise real-time and post RBI use by police – there are still opportunities to push for stronger interpretations, to litigate against the authorisation on the basis of it still entailing severe human rights infringements, and to lodge formal complaints with Data Protection Authorities (DPAs) or with AI Act authorities. Collectively, all these things could contribute to further limiting or stopping biometric mass surveillance practices in the EU;
  5. The use of either real-time or post RBI by any actor other than police (i.e. private actors, state authorities other than police) is considered already prohibited by the AI Act. If passing national bans on police use, Member States could choose to make this explicit, but the AI Act already gives us a clear basis to contest any such uses.

This, and all further analysis in the Guide, is based on 2021_0106(COR01), which is the final version of the Act that will become EU law. Once the final version is published in the EU’s Official Journal, we will update this page with a link to the official version of the law.

This guide builds on the ideas, work and expertise of the EDRi network, in particular its biometric working group and AI core group, the Reclaim Your Face campaign, academics, and several lawmakers and advisors in the European Parliament and European Commission in the past several years.

Where specific work has been used, this is referenced, but we also want to expressly thank Karolina Iwańska, Douwe Korff and Plixavra Vogiatzoglou for their insights.

We also want to highlight the long-running biometrics/AI Act collaboration with and contributions from Access Now (Daniel Leufer and Caterina Rodelli), Algorithm Watch (Nikolett Aszodi and Kilian Vieth-Ditlmann), European Disability Forum (Kave Noori), Amnesty International (Mher Hakobyan), Bits of Freedom (Lotte Houwing and Nadia Benaissa), IT-Pol (Jesper Lund), ECNL (Karolina Iwańska), La fede.cat and Algorights (Judith Membrives), Hermes Center (Alessandra Bormioli and Claudio Agosti) and ARTICLE 19 (Vidushi Marda). Without their work, dedication and expertise, this guide would not exist.

Part 1

The AI Act framework


Key terminology

Remote biometric identification (RBI): the term chosen in the AI Act to refer to uses of biometric identification systems – like facial recognition, although any biometric feature could be used – which have the potential to scan many people at once. However, this term is not defined precisely in the AI Act, nor elsewhere, leaving room for interpretation.

Publicly-accessible spaces: the ban on real-time RBI by police relates to public spaces only. Whilst in general the Act defines this term broadly, which we support, it excludes borders and migration detention centers, which we strongly oppose.

Real-time RBI: a live use of an RBI system.

Post RBI: a retrospective use of an RBI system.

Law enforcement purposes: as noted by emeritus Professor Douwe Korff, this concept is used in a worryingly broad way in the AI Act, potentially including other state and non-state actors who have been empowered to carry out a law enforcement task. Korff notes that this could include, for example, banks when carrying out money laundering checks.

To understand how the biometric prohibitions in the AI Act work, we need to understand the over-arching framework of the whole Act. Broadly speaking, the EU’s AI Act sorts uses of AI systems into six categories, and then applies a set of specific rules accordingly.

Four explicit categories:

Article 5: “Prohibited AI Practices”

Specific uses of AI systems that pose an unacceptable risk to people’s fundamental human rights or safety are banned. These prohibitions can be found in Article 5. The prohibitions relevant to biometric mass surveillance practices are:

  • The use of real-time RBI in publicly-accessible spaces for the purpose of law enforcement is banned (Art. 5.1.(h)). However, it is important to note that there are three exceptions to this ban. If police want to use one of these exceptions, the use is considered high-risk, and will additionally have to follow extra biometrics-specific rules;
  • The use of emotion recognition systems is banned in certain settings – namely workplaces and education institutions – except if it’s for “medical or safety reasons”, a concerning loophole;
  • Biometric categorisation systems inferring sensitive characteristics are banned, but with the exclusion of gender, gender identity, health status and disability, and with an additional exception for police using these tools for “labelling or filtering”;
  • The scraping of the internet or of CCTV footage by AI systems in order to build or expand facial recognition databases – like the notorious Clearview AI software – is banned (Art. 5.1.(e)).

Article 6 and Annex III: “High-risk AI systems”

According to Article 6, AI systems that pose a high risk to people’s rights or safety have to follow a series of steps and rules, including on things like data quality, documentation and fundamental rights impact assessments (FRIAs). These rules constitute the main part of the AI Act, which the initial European Commission impact assessment predicted would apply to between 5% and 15% of all AI systems on the EU market. The systems that are considered high risk are determined by the “areas” listed in Annex III of the text.

However, Article 6.4. allows those developing or selling the AI system (“providers”) to self-assess whether they fall under Annex III, and there is no systematic check of their assessment. The conformity assessment for high-risk AI systems (proof of compliance with the AI Act) is also self-assessed, with the exception of biometric systems.

  • Annex III, the annex of high-risk areas, includes any remote biometric identification (real-time or post), emotion recognition or biometric categorisation systems that aren’t ruled out by one of the prohibitions. This means that “post” (retrospective) police use of remote biometric identification systems is always high risk. Any actor developing, deploying or using these systems therefore needs to follow all the rules in Chapter III of the AI Act;
  • Whilst for all other high-risk AI systems, the conformity assessment (proof of compliance with the AI Act) is self-assessed, this is not the case for “AI systems intended to be used for biometrics” (Recital 125), which must have a third-party “testing, certification and inspection” (definition in Article 3.(21));
  • In addition, any use of real-time (live) remote biometric identification by police in publicly-accessible spaces, and which is based on one of the listed exceptions to the RBI prohibition, has to follow several additional rules compared to other high-risk systems. These extra rules for the use of real-time RBI are laid out in Article 5.2. and explained in more detail in Part 2 of this guide.

Article 50: “Certain AI systems”

A small number of systems (e.g. “synthetic … content”, also known as deepfakes) are subject to some additional transparency requirements in Article 50. This includes things like making generated content “detectable as artificially generated or manipulated” (Art. 50.2. and 50.4.), although there are exceptions to this rule. Whilst the systems described in Article 50 have sometimes been referred to as low-risk systems, the text does not actually describe them as such. There are also specific rules for emotion recognition (ER) and biometric categorisation (BC) systems when such uses aren’t covered by the ban (e.g. biometric categorisation on the basis of gender, which is not banned). The use of these ER and BC systems also comes with a specific requirement to “inform” anyone who is exposed to the use of such a system, except when the system is used for law enforcement purposes.

Articles 53 – 55: “General purpose AI [GPAI] models”

Articles 53 to 55 lay out specific rules for general purpose AI models like ChatGPT, including keeping up-to-date technical records and documentation and having policies for compliance with copyright law. Where general-purpose AI systems pose a “systemic risk”, there are enhanced rules including on testing and cybersecurity. Further guidelines on GPAI are also going to be developed by the EU AI Office.

One implicit category:

Negligible risk systems

These systems – the majority of AI systems that will be on the EU market – do not have to follow any of the AI Act’s rules (but of course must still follow other applicable rules, such as on data protection or product safety).

Providers of these systems – the majority of those in the EU – are instead encouraged to adhere to voluntary codes of conduct and other voluntary ethical rules which will be developed.

And a biometrics-specific category:

Post (retrospective) remote biometric identification

All forms of retrospective RBI, regardless of who is using the system, are considered high risk, and therefore have to follow the rules mentioned above, as well as undergoing a third-party conformity assessment. What’s more, there are additional provisions listed in Article 26. for any use of RBI that happens after the fact (post) – most commonly by applying biometric identification algorithms to CCTV or other video footage. These additional rules include things like requiring authorisation, and for the identification to be “targeted”.

In order to implement the AI Act’s rules, the European Commission, supported by several newly-created fora and bodies, has to develop guidelines and other procedures for how to interpret them. These guidelines will include how Member States should interpret the prohibitions and the high-risk systems, which makes them very important.

Advocacy opportunity #1

Influence guidelines, codes of practice and other EU-level processes (NOW)


Non-exhaustively, some of the bodies and processes which will be most relevant for the fight against BMS include:

  • The European Commission: the Directorate-General for Communications Networks, Content and Technology (DG Connect) is responsible for delivering key guidelines on the implementation and interpretation of the AI Act (Article 96). NGOs already advocating at EU level can build efforts to influence the drafting of these guidelines into their advocacy work, or can work via national channels with influence on the EU (e.g. telecommunications ministries). In addition, as official Commission documents, all draft guidelines will be presented for a public consultation, creating a formal opportunity for civil society to provide feedback;
    • Most important from a biometrics perspective will be the parts on the prohibitions (Art. 96.1.(b)) and on high-risk AI systems (Article 6.5.). The latter will include a list of “practical examples” of systems that are considered high risk and those that are not, which will be very influential on future protections (or lack thereof);
    • Subsequently, the European Commission will also develop templates for Member States to report about their use of RBI, which could be important for us to try to influence (Article 5.4.);
  • The AI Office: the European AI Office, recently established under Article 64, will be made up predominantly of existing staff from the European Commission’s Directorate-General (DG) Connect, supplemented by recruited experts in technology and policy, and complemented with participation from other stakeholders. They will create, or support the creation of, codes of practice (Article 56), codes of conduct (Article 95) and other tools, which will be important for ensuring a strong and coherent application of the AI Act’s rules. You can express interest in staying up to date on opportunities for civil society input, and see the latest about the Office and its Advisory Forum;
  • The Scientific Panel: the AI Office and the Commission will be supported by a Scientific Panel, which represents an opportunity for technologists to ensure robust and evidence-based implementing rules for the AI Act. We also want to push for this panel to include social sciences expertise, which will be critical for a rights-protective approach to AI, and should include this as a key advocacy message in our work on the AI Act;
  • The EU AI Board / ‘the Board’: Article 65 of the Act also creates a formal European Artificial Intelligence Board, made up of one representative from each EU Member State, with observer status for the European Data Protection Supervisor, and attendance in meetings by the AI Office. This Board oversees the enforcement of the AI Act. The work of civil society can highlight issues to the Board or call on it to react to certain developments / areas of concern at EU or national level;
  • The Advisory Forum: the AI Board and the European Commission will be supported in their work by an Advisory Forum (Article 67), including industry/commercial entities, academia, certain representatives of the EU Member States, standard-making bodies, other EU agencies, and civil society. This is a formal opportunity to have a seat at the table, as well as an opportunity to advocate towards a rights-respecting implementation of the AI Act. As well as getting a formal seat through civil society, those who want to influence can also seek a seat through existing participation in standard-setting bodies, in particular CEN-CENELEC;
  • Competent authorities: each country will set up an authority, known as a ‘market surveillance authority’, to enforce the AI Act. This may be part of an existing authority or it might be completely new. They will work with the country’s national human rights authority to ensure compliance with the AI Act. Along with existing data protection authorities, these authorities will be very important for getting the most rights-protective outcomes from the AI Act, so already we can advocate nationally to make sure that the authorities appointed for this role are independent, credible and have strong fundamental rights expertise.

Part 2

Live remote biometric identification (RBI)


One of the most high-profile forms of biometric mass surveillance is the use of live facial recognition (or other biometric surveillance, like recognition of people’s eyes, ears, or gait) to track people across public spaces. This can alert whoever is using the system to the identity of everyone that is in a particular space in real-time.

When piloted in London, for example, these systems repeatedly misidentified people (especially black men) and – despite claims from the police of high success rates – were actually accurate only about 19% of the time. People who tried to avoid walking past these cameras, or exercised their legitimate right to resist them by covering their face, were stopped and in some cases, fined.

Once the AI Act comes into force, the use of live public facial recognition and other biometric identification systems by police (or, in AI Act terms, “real-time remote biometric identification in publicly-accessible spaces for the purpose of law enforcement”) will be in principle banned across the entire EU.

This means that from early 2025 (which is when the bans are estimated to start applying), any EU police force trying to use such systems will automatically be in contravention of the AI Act – which is a win for the fight against BMS!

This ban does not apply to individual and genuinely consent-based uses of biometric identification, like unlocking your phone or going through an e-Passport gate. The ban is about the mass uses in our streets and other public spaces, which treat all of us as guilty until proven innocent, and which we have long argued can never be strictly necessary and proportionate according to human rights law.

Advocacy opportunity #2

Exercise the new right to complain about illegal uses (FUTURE)


Lodge a formal complaint with the national AI Act authority

  • Any use of real-time biometric identification in a publicly-accessible space by police – such as deploying facial recognition against passers-by in a city center to see if any of them match suspects on a watch-list – will become expressly illegal across the EU from early 2025. The AI Act creates a right (Article 85) for anyone to lodge a complaint with the relevant ‘market surveillance authority’ (the national AI Act oversight and enforcement body).
  • This right will therefore be important in 2025 and beyond, in order to contest pilots or deployments of live facial recognition or other biometric identification systems by police in publicly-accessible spaces. We will know more about how to complain to these authorities once they have been formally established (if they are starting from scratch) or designated (if an existing authority is being given this power) by each EU Member State government, which is happening in summer 2024;
    • The European Commission’s AI Office (see Part 1 for more information) is also empowered to be involved in joint complaints, for example those spanning across several EU Member States. This makes the AI Office – predominantly staffed by the European Commission – an important entity for future complaints, too;
    • These complaints can be facilitated by information provided by (or pursued in collaboration with) national human rights, consumer and/or equality bodies and institutions, who are empowered under Article 77 to investigate possible infringements of the AI Act;

Lodge a complaint under data protection law

  • In addition, the EU’s national data protection authorities (DPAs) will continue to oversee compliance with EU data protection law. We can, therefore, also complain to DPAs about the unlawful processing of biometric data as we would with any other violation of our personal data:
    • Article 10 of the Law Enforcement Directive (LED) (the police version of the GDPR) does not expressly prohibit the processing of biometric data by police (although we are not aware of any European DPA ever authorising real-time RBI by police). So one benefit of the AI Act is that it gives us a clearer interpretation that, unless specifically authorised in a national law in accordance with the AI Act, any use of real-time RBI by police is always illegal;
    • In addition, any use of real-time RBI falling outside one of those exceptions (e.g. police using real-time RBI to control/monitor access to an event or a demonstration) would always be illegal under both the LED and the AI Act;
    • Once the high-risk parts of the AI Act enter into application in 2026, there will be additional opportunities to fight uses that don’t comply with the high-risk rules discussed in Part 1, such as mandatory third-party compliance checks;

The unfortunate caveat, and one of our biggest concerns about the AI Act when it comes to the fight against BMS, however, is that the Act gives the Member State governments the option to create national exceptions to the above-mentioned ban on real-time RBI by police. They can authorise these exceptions by passing national laws to legalise real-time remote biometric identification in publicly-accessible spaces.

This means that the fight against BMS will shift quite considerably to EU Member-State level, giving us another opportunity to stop these practices. At the same time, this also shifts the fight from one central point (the EU AI Act) to up to twenty-seven different national laws – possibly making our work a lot harder!

Although there are several additional safeguards in the AI Act that create more opportunities to limit or stop the use of real-time RBI by police if/when EU Member States decide to authorise the exceptions (see below), the ideal outcome would be for this to never be possible in the first place.

Advocacy opportunity #3

Advocate/mobilise for EU Member States to pass full RBI bans at national level (NOW)


The ideal: a full ban on all RBI at national level

  • Rather than passing national laws to authorise real-time remote biometric identification in publicly-accessible spaces, EU Member State governments conversely have the opportunity to fully ban these practices;
  • We would expect that countries like Germany and Austria, whose representatives were the most vocal about the threats of real-time RBI during AI Act negotiations, would be most amenable to a full ban. In Germany, a Parliamentary hearing has already been used to explore this opportunity;
    • Prior to the final agreement of the AI Act by EU Member States, the government of Austria presented a statement which criticised, among other things, the AI Act’s rules on biometric systems. They argued that the exceptions to the real-time RBI ban “are too far-reaching and do not correspond to the Austrian understanding of a proportionate interference with the fundamental rights of citizens.”
    • This statement is based on EU law, not Austrian law, meaning that it can and should be used to fight against both real-time and retrospective (post) RBI under the AI Act;
  • Based on the rules established in the AI Act, this would be most straightforward for real-time (live) remote biometric identification in publicly-accessible spaces, and could be achieved through simple wording, which references the EU AI Act. The following text would be a strong formulation, although it might have to be adjusted to fit/reference existing national criminal/police laws:

“In accordance with Article 9 of EU law 2016/679, Article 10 of EU law 2016/680 and with consideration for EU law 2021/0106, in particular Recital (37) and Articles 5.5 and 26.10, the following AI practice shall be prohibited: the use of remote biometric identification systems.”

A more granular and customisable alternative

  • An alternative option would be to list all the practices that we want to be prohibited, e.g.:
    • “The use of real-time remote biometric identification systems by law enforcement agencies or on their behalf”;
    • “The use of real-time remote biometric identification systems by public authorities or on their behalf”;
    • “The use of real-time remote biometric identification systems by private actors”;
    • etc.
    • This could (should) then be expanded to include post uses (see Part 3);

A more conservative alternative

  • A more conservative / less ideal formulation would be to limit the ban to the parts expressly prohibited in the Act. However, we warn that this should be only a back-up option, as a ban on any and all RBI would be an enormous step in the fight against BMS. There is also the fact that some lawmakers might dismiss a ban that is exactly the same as the ban in the AI Act by claiming that a national ban would be redundant; whereas by adding additional provisions, there is a stronger argument to justify national intervention. However, if this is the only feasible option, suggested wording could incorporate some or all of the following text:
    • In accordance with Article 10 of EU law 2016/680 and EU law 2021/0106, in particular Recital (37) and Article 5.5, the following AI practice shall be prohibited: the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement.
  • If ‘real-time’ and ‘post’ uses are dealt with separately under national laws, we should still push for both to be prohibited. Strategically, this can also help our arguments for why a national full ban is needed, even though real-time RBI in publicly-accessible spaces by police is banned if they don’t pass an authorising law;

Alternative strategies

  • As mentioned above, real-time RBI by police in publicly-accessible spaces will be prohibited in the absence of a national authorising law. So an alternative approach to the above would be to simply prevent any authorising law from being passed;
    • Pros: this approach is simpler and doesn’t run the risk of being watered down during a political negotiation/compromise process. It is likely to be the ‘safer’ option in Member States where there is not a lot of support for the idea of a national ban;
    • Cons: this would be more vulnerable to political changes, whereas a clear legal ban would arguably be more robust/sustainable. A legal ban is likely to be the ‘safer’ option in Member States where there is clear support for a national ban;
  • Specific moments in the process can also be good opportunities to mobilise the public, national lawmakers and authorities etc. to raise awareness of what is coming:
    • The entry into force of the AI Act, 20 days after it is published in the Official Journal, currently predicted for August 2024;
    • The entry into application of the prohibitions in early 2025.

If a Member State does decide to pass an authorising law for the use of real-time RBI by police in publicly-accessible spaces, there are still steps we can take to push back. Specifically in the case of real-time RBI, the EU’s AI Act is a ceiling, not a floor. Member State governments do not have to authorise all of the exceptions, but can instead choose to be more limited and restrained.

Advocacy opportunity #4

Map the current situation and future aspirations of national real-time RBI laws (NOW)


  • Something that would be really useful for our work would be to have a comprehensive overview / mapping of the following:
    • What are the national rules currently applicable to the use of real-time RBI in publicly-accessible spaces by law enforcement authorities in each EU Member State?
    • What changes would need to be made to those national laws to make them compliant with the AI Act’s exceptions?
    • What level of appetite is there to make these changes?
  • n.b. the same mapping could also be done for post uses – see Part 3.

Advocacy opportunity #5

Strictly limit the scope of the exceptions in the national authorising laws (REACTIVE ONLY)


In the event that Member States go ahead with national authorising laws, we should push for as few of the exceptions as possible to be put into the law, and for the scope of those exceptions to be made as narrow as possible.

  • A national authorising law for real-time RBI is required by Article 5.5., which states that the authorisations must be laid out in a national law, and further that this national law can be more restrictive than what is in the Act. This can be done in one or both of the following ways:
    • Authorising only one or two of the exceptions (see below), instead of all three;
    • Limiting the scope of the exception(s) that are authorised, for example:
      • (For exception 2) deleting the “genuine and foreseeable” clause which allows speculative use;
      • (For exception 3) increasing the threshold for searching for suspects from those accused of a crime with a minimum 4-year sentence to those with a minimum 6-year sentence;
      • (For exception 3) deleting some of the crimes from the list of permitted crimes in Annex II.* Although lawmakers may argue that all of these are serious enough to justify the use of real-time RBI, those crimes that have less of an immediate urgency for catching the perpetrator might be easiest to argue should be excluded: sabotage, drug trafficking, organ trade, environmental crime etc.
  • So long as no authorising law is passed, police will not be allowed to use real-time RBI in publicly-accessible spaces, so stalling the lawmaking process for a national authorising law can also be a somewhat effective (albeit not sustainable) tactic;
  • In the event that national authorising laws are passed, their constitutional legality and/or compatibility with EU or European human rights law could still be challenged in national courts. For the latter (challenge on the basis of EU/European law), this would be able to set important precedent for other Member States in the event that the challenge is successful.

The following use cases are the maximum exceptions allowed by the AI Act (Article 5.1.h). Each of these has wording that, if used carefully, could potentially also be used to argue that any mass use is banned:

Exception 1: the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons (Article 5.1.h.(i))

The word “targeted” is ambiguous here, but may offer opportunities for us to push back against public deployments of real-time RBI by arguing, as the Italian data protection authority has already confirmed, that public-space uses can never be targeted. The AI Act also requires this use to be for “specific” victims, meaning that preventative or anticipatory uses would not be allowed;

Exception 2: the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons, or of a genuine and present or genuine and foreseeable threat of a terrorist attack (Article 5.1.h.(ii))

  • Whilst we have a lot of concerns about how broad this exception is, the wording may still offer opportunities for us to contest the use of permanent biometric surveillance infrastructures (e.g. biometric analysis-equipped CCTV cameras, rather than temporary or mobile cameras/devices). This is because specific threats or attacks are by definition limited in time and scope – whereas mass infrastructures are designed for widespread use;
  • This is backed up by some wording in one of the recitals: any use “should be limited to what is strictly necessary concerning the period of time, as well as the geographic and personal scope” (Recital 34);
Exception 3: the localisation or identification of a person suspected of having committed one of the criminal offences listed in Annex II, for the purpose of a criminal investigation or prosecution (Article 5.1.h.(iii))

  • These specific offences must meet a threshold for seriousness (a sentence of at least 4 years if convicted), and searches are permissible only so long as they are within the bounds of an investigation or prosecution. This means that any sort of preventative use (i.e. using real-time RBI in a city center just in case a suspect goes there), or use without very strong grounds to believe the person being searched for is in a particular area, would not be permitted. Recital 34 complements this interpretation, too;
  • The rules also state that each specific criminal offence permitted (Article 5.5.) must be specifically listed in the national authorising law if it is to be allowed, and therefore can also be excluded;

In addition, all of these exceptions can only be used to “confirm the identity of the specifically targeted individual” (Article 5.2.), which offers two further opportunities:

  • The phrasing “confirm[ing]” an identity implies that the suspect’s identity must already be known and the system can only be used to confirm this – so fishing expeditions would never be allowed. Emeritus Professor of International Law, Douwe Korff, supports this interpretation on the basis of case law of the Court of Justice of the EU, specifically the PNR (Passenger Name Records) case;
  • This phrasing also prevents the identification of anyone other than the specifically targeted individual. Therefore it would be unlawful for police to identify anyone else using the system.

Reference: Douwe Korff, ‘WORKING NOTE ON THE PERMISSIBILITY OF THE USE OF “REAL TIME” REMOTE BIOMETRIC IDENTIFICATION IN PUBLICLY ACCESSIBLE PLACES BY LAW ENFORCEMENT AGENCIES UNDER THE EU ACT AS CURRENTLY DRAFTED’, 2024, available on request.

* The list of crimes is: terrorism, trafficking in human beings, sexual exploitation of children, and child pornography, illicit trafficking in narcotic drugs or psychotropic substances, illicit trafficking in weapons, munitions or explosives, murder, grievous bodily injury, illicit trade in human organs or tissue, illicit trafficking in nuclear or radioactive materials, kidnapping, illegal restraint or hostage-taking, crimes within the jurisdiction of the International Criminal Court, unlawful seizure of aircraft or ships, rape, environmental crime, organised or armed robbery, sabotage, participation in a criminal organisation involved in one or more of the offences listed above.

Advocacy opportunity #6

Make complaints on the basis of a rights-forward interpretation of the exception(s) (FUTURE)


As in opportunity #2, we can complain to the national market surveillance authority or DPAs. Here, we could do that against uses that Member States claim are within a lawful exception, but which we argue are not compatible with a rights-based interpretation or with existing case law (see also opportunity #8). Based on the exceptions, this could be, for example, against:

A use under exception 1 that scans passers-by against a database of missing people (rather than against an individual image of the person being sought).

The roll-out of biometric identification infrastructure, or the equipping of CCTV cameras with facial recognition capability, in anticipation of a threat under exception 2 but without a specific threat being present.

A use under exception 3 that scans every person in a space, including an innocent person, whose rights we could argue have been infringed — not just the suspect.

When implementing any of the exceptions, the Member States also have to follow a series of additional rules, some of which are meaningful, such as the mandatory notification to the European Commission of national authorising laws; whereas others amount to deeply-concerning loopholes. Those include, inter alia, things like weak authorisation processes, disclosing uses to competent authorities but giving them very limited information and oversight powers (because this can be done retrospectively, and without disclosing “sensitive operational data”), and reporting on only aggregate usage to the European Commission. This is all outlined in Article 5.2. to 5.8. We also want to point out that none of these safeguards can prevent the essential issue that EU governments would be able to conduct biometric mass surveillance practices. This is therefore one of the most disappointing parts of the AI Act as far as the Reclaim Your Face campaign is concerned.

Fortunately, even the weakest of these procedural rules have silver linings. This is because despite their weak wording, the EU Member States have the possibility to be more restrictive/limited in how they implement them, but are not allowed to remove or weaken safeguards: “Member States may introduce, in accordance with Union law, more restrictive laws on the use of remote biometric identification systems.” (Article 5.5.)

Advocacy opportunity #7

Strengthen the national procedures to get as close to a ban as possible (REACTIVE ONLY)


  • For each use of the real-time RBI system, the police have to quantify the ludicrous risk of not using the system: “the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm that would be caused if the system were not used;” (Article 5.2.(a));
    • Unfortunately, this seems to be a requirement for any use due to the use of the word “shall” (thereby removing Member State discretion to be more restrictive in their national authorising law), but it at least has to be balanced against the consequences for the rights and freedoms of anyone that could be impacted (Article 5.2.(b)). Given that police have many other methods at their disposal besides facial recognition, and that the impact on the whole population can be very severe (e.g. as established in the landmark Census judgement and in a wide range of EU case law), this could be used to argue that no use will ever be justified;
  • The law allows the police to waive the requirements to conduct a prior fundamental rights impact assessment and registration of the system in the database “in duly justified cases of urgency” (Article 5.2.). This can be deleted in a national authorising law;
  • The law allows the police to waive the requirement for judicial or administrative authorisation “in a duly justified situation of urgency” (Article 5.3.). What’s more, in some EU Member States, administrative authorisation can be granted by a police body, making this a case of marking their own homework. This urgency exception can be deleted in a national authorising law, and the authorising body can be required to be an “independent judicial authority” only;
  • It is also possible that national rules on the third-party conformity assessments could be added, for example to specify more clearly which third parties are allowed to carry these out, or to require additional transparency or oversight;
  • Please note that this list is not exhaustive, and will be added to as our analysis of the Act continues.

The analysis above shows that we have the chance to push for a much more limited and strict interpretation of the exceptions to the ban on real-time RBI. Whilst this wouldn’t necessarily mean a full ban in law, we could use it to get as close as possible to a de facto ban – meaning that the threshold for use is so high, the conditions so narrow and the rules so strict, that there is (almost) no situation in which police forces would actually be able to use the system in practice.

Advocacy opportunity #8

Monitoring of procedures, interpretations etc. (FUTURE)


We have the possibility to collaborate with journalists, researchers, data protection authorities, national human rights bodies and others to monitor compliance, for example on the following:

  • Checking that the authorisations granted are genuinely “strictly necessary” (Article 5.3.);
  • Ensuring that authorising bodies are acting independently and not just saying “yes” to everything;
  • Investigating potential abuses;
  • etc.

We should also keep a close eye on possible loopholes and issues that might arise, for example:

  • Korff notes that the existence of regulatory AI sandboxes (Article 57.1) might create a loophole whereby authorities can ‘test’ real-time RBI systems under “real-world conditions” without having conducted a fundamental rights impact assessment or registered the system in the public database, meaning that such a deployment could be done in completely opaque conditions. Further research is needed into this issue, but this concerning interpretation is supported by the Austrian government’s concerns about the final AI Act. If confirmed, the LED could provide a way to contest this, because such a use case would still need to be done in consultation with the relevant national data protection authority (DPA);

Advocacy opportunity #9

Make use of case law and litigation opportunities (NOW / FUTURE)


Even once the AI Act is in force, the EU’s landmark data protection rules – the GDPR and LED – will continue to be really important. These laws are backed up by a lot of case law which emphasises, for example, the extreme sensitivity of the processing of biometric data (CJEU Case C-205/21), as well as a series of decisions by national DPAs to stop police from using biometric data in certain ways. These judgements and decisions still apply, and the AI Act also emphasises that biometric processing that was already illegal under the GDPR or LED remains just as illegal after the AI Act.

This can be used as an argument against national authorising laws – for example with the threat that these practices may still be deemed illegal.

Even if member states implement broad authorising laws, these arguments can still help us to push back against real-time RBI by police. That’s because – to date – no EU DPA has expressly allowed the police use of real-time RBI, generally considering it to be already illegal under the LED. We could, therefore, explore litigation opportunities to push an interpretation of the LED which does not allow any real-time use of RBI. The AI Act itself recognises, in a recital, that real-time RBI comes with serious and severe threats to fundamental rights (Recital 32).

What’s more, just because an EU Member State has passed a national authorising law for the use of RBI by police, that doesn’t necessarily mean the practice is compatible with rights such as privacy, consumer rights, data protection, dignity, free expression and association, non-discrimination and other laws in both the European and the international human rights framework. Fighting back against these laws in national courts, and ultimately in either the European Court of Human Rights (ECtHR) or the Court of Justice of the EU (CJEU), remains possible – and could set binding legal precedent which would stop other EU Member States from authorising the exceptions. See opportunity #5 for additional reflections on national constitutional challenges.

RBI for the purpose of national security


There is another deeply worrying part of the AI Act which could impact the ban on real-time RBI. The Act contains a get-out-of-jail-free card for countries to use banned AI systems of any kind if they invoke the broad claim of ‘national security’, as well as for developers to create AI systems for claimed ‘national security’ purposes, without following the AI Act’s rules (Article 2.3.). As ECNL have explained, this goes against the EU’s treaties, which do not allow blanket security exceptions.

The main opportunities we see here are either traditional watchdog-like activities (journalists exposing harmful or abusive uses justified on the basis of national security; NGOs, journalists and other civil society actors taking a monitoring or investigative role over national security uses) or litigating to get this part struck from the AI Act.

RBI use by other actors (non-police authorities, private entities)


The AI Act does not ban the use of real-time remote biometric identification in publicly-accessible spaces by actors other than the police. This is explained in recital (39):

“In the application of Article 9(1) of Regulation (EU) 2016/679 [the General Data Protection Regulation (GDPR)], the use of remote biometric identification for purposes other than law enforcement has already been subject to prohibition decisions by national data protection authorities.”

This tells us that the AI Act already considers any type of remote biometric identification, whether live or retrospective, to be banned from use by state actors other than police (e.g. local authorities) as well as by all private/commercial entities (e.g. supermarkets, casinos).

This is really important, because it shows a clear and unambiguous interpretation of rules established in the GDPR. It emphasises that the exceptions to the ban on processing biometric data in Article 9 of the GDPR can never be used for remote biometric identification (meaning identification which is done at a distance, in a way that is likely to scan many people at once). This implies, too, that – as we have long argued – people can never truly consent to such a use, nor can there be a substantial public interest in such uses, despite companies and authorities claiming otherwise.

Part 3: Post (retrospective) remote biometric identification (RBI)

Coming July/August 2024

This guide was written by Ella Jakubowska, Head of Policy at EDRi

With enormous thanks and gratitude to the EDRi network and Reclaim Your Face campaign