How to fight Biometric Mass Surveillance after the AI Act: A legal and practical guide

The EU's Artificial Intelligence Act has been adopted, laying out an in-principle ban on live mass facial recognition and other public biometric surveillance by police. Yet the wide exceptions to this ban may pave the way to legitimising the use of these systems. This living guide, for civil society organisations, communities and activists, charts a human rights-based approach for how to keep resisting biometric mass surveillance practices now and in the future.

By EDRi · May 27, 2024

Note: this is a dynamic and evolving piece of work which we expect to change over time as our assessment of the final AI Act matures, and as new opportunities (and challenges) arise. Any changes made after the initial publication of this guide will be clearly marked.


Additional sections on high-risk biometrics; biometric categorisation; emotion recognition; biometric systems in border and migration contexts; biometrics definitions; and how to make use of the ban as a global precedent will also be added in due course.


The future of the EU’s fight against biometric mass surveillance


Throughout spring 2024, European Union (EU) lawmakers have been taking the final procedural steps to pass a largely disappointing new law, the EU Artificial Intelligence (AI) Act.

This law is expected to come into force in the summer, with one of the most hotly-contested parts of the law – the bans on unacceptably harmful uses of AI – slated to apply from the beginning of 2025 (six months and 20 days after the legal text is officially published).

The first draft of this Act, in 2021, proposed to ban some forms of public facial recognition, showing that lawmakers were already listening to the demands of our Reclaim Your Face campaign. Since then, the AI Act has continued to be a focus point for our fight to stop people being treated as walking barcodes in public spaces.

But after a gruelling three-year process, AI Act negotiations are coming to an underwhelming end, with numerous missed opportunities to protect people’s rights and freedoms (especially for people on the move) or to uphold civic space.

One of the biggest problems we see is that the bans on different forms of biometric mass surveillance, or BMS, are full of holes. BMS is the term we’ve used as an umbrella for different methods of using people’s biometric data to surveil them in an untargeted or arbitrarily-targeted way – which have no place in a democratic society.

At the same time, all is not lost. As we get into the nitty-gritty of the final text, and reflect on the years of hard work, we mourn the existence of the dark clouds – yet we also celebrate the silver linings and the opportunities they create to better protect people’s sensitive biometric data.

Whilst the AI Act is supposed to ban a lot of unacceptable biometric practices, we’ve argued since the beginning that it could instead become a blueprint for how to conduct BMS.

As we predicted, the final Act takes a potentially dystopian step towards legalising live public facial recognition – which so far has never been explicitly allowed in any EU country. The same goes for pseudoscientific AI mind-reading systems, which the AI Act shockingly allows states to use in policing and border contexts. Using machines to categorise people’s gender and other sensitive characteristics, based on how they look, is also allowed in several contexts.

We have long argued that these practices can never be compatible with our fundamental rights to dignity, privacy, data protection, free expression and non-discrimination. By allowing them in a range of contexts, the AI Act risks legitimising these horrifying practices.


Yet whilst the law falls far short of the full ban on biometric mass surveillance in public spaces that we called for, it nevertheless offers many points to continue our fight in the future. To give one example, we have the powerful opportunity to capitalise on the wide political will in support of our ongoing work against BMS to make sure that the AI Act’s loopholes don’t make it into national laws in EU member states – with strong indications of support already in Austria and Germany.

The below legal and practical guide on ‘How to Fight Biometric Mass Surveillance after the AI Act’ is intended to inform and equip those who are reflecting and re-fuelling for the next stage in the fight against BMS. This includes charting out more than a dozen specific advocacy opportunities including formal and informal spaces to influence, and highlighting the parts of the legal text that create space for our advocacy efforts.

We also remind ourselves that whilst the biometrics bans have been dangerously watered down, the Act nevertheless accepts that we must ban AI systems that are not compatible with a democratic society. This idea has been a vital concept for those of us working to protect human rights in the age of AI, and we faced a lot of opposition on this point from industry and conservative lawmakers.

This legal and normative acceptance of the need for AI bans has the potential to set an important global precedent for putting the rights and freedoms of people and communities ahead of the private interests of the lucrative security and surveillance tech industry. The industry wants all technologies and practices to be on the table – but the AI Act shows that this is not the EU’s way.

Executive summary

This guide details numerous opportunities to fight biometric mass surveillance


  1. From early 2025, some uses of real-time remote biometric identification (RBI) by police in publicly-accessible spaces; emotion recognition in workplaces and education settings; certain types of biometric categorisation; and all scraping of the internet to get facial images for biometric databases will be banned in the EU;
  2. EU Member States can extend this ban at any time in order to fully ban the use of real-time (live) RBI in publicly-accessible spaces for the purposes of law enforcement. They could also choose to extend this to all actors (i.e. including private actors and state actors other than police);
  3. Post (retrospective) RBI for law enforcement purposes (whether in public or non-public spaces) is not banned, but has to follow additional procedural rules which will apply from around mid-2026. Member States can only use these systems if they have updated their national laws to meet the AI Act’s conditions, meaning that these practices are de-facto banned until that point. As with real-time use, Member States can also go further to restrict or fully ban the use of post (retrospective) remote biometric identification for the purposes of law enforcement. They could also choose to extend limits/bans to all actors (i.e. including private actors and state actors other than police);
  4. Even in the worst case scenario – where Member States instead authorise real-time and post RBI use by police – there are still opportunities to push for stronger interpretations, to litigate against the authorisation on the basis of it still entailing severe human rights infringements, and to lodge formal complaints with Data Protection Authorities (DPAs) or with AI Act authorities. Collectively, all these things could contribute to further limiting or stopping biometric mass surveillance practices in the EU;
  5. The use of either real-time or post RBI by any actor other than police (i.e. private actors, state authorities other than police) is considered already prohibited by the AI Act. If passing national bans on police use, Member States could choose to make this explicit, but the AI Act already gives us a clear basis to contest any such uses.

This, and all further analysis in the Guide, is based on 2021_0106(COR01), which is the final version of the Act that will become EU law. Once the final version is published in the EU’s Official Journal, we will update this page with a link to the official version of the law.

This guide builds on the ideas, work and expertise of the EDRi network, in particular its biometric working group and AI core group, the Reclaim Your Face campaign, academics, and several lawmakers and advisors in the European Parliament and European Commission in the past several years.

Where specific work has been used, this is referenced, but we also want to expressly thank Karolina Iwańska, Douwe Korff and Plixavra Vogiatzoglou for their insights.

We also want to highlight the long-running biometrics/AI Act collaboration with and contributions from Access Now (Daniel Leufer and Caterina Rodelli), Algorithm Watch (Nikolett Aszodi and Kilian Vieth-Ditlmann), European Disability Forum (Kave Noori), Amnesty International (Mher Hakobyan), Bits of Freedom (Lotte Houwing and Nadia Benaissa), IT-Pol (Jesper Lund), ECNL (Karolina Iwańska), La and Algorights (Judith Membrives), Hermes Center (Alessandra Bormioli and Claudio Agosti) and ARTICLE 19 (Vidushi Marda). Without their work, dedication and expertise, this guide would not exist.

Part 1

The AI Act framework


Key terminology

Remote biometric identification (RBI): the term chosen in the AI Act to refer to uses of biometric identification systems – such as facial recognition, though any biometric feature could be used – which have the potential to scan many people at once. However, this term is not precisely defined in the AI Act, nor elsewhere, leaving room for interpretation.

Publicly-accessible spaces: the ban on real-time RBI by police relates to publicly-accessible spaces only. Whilst in general the Act defines this term broadly, which we support, it excludes borders and migration detention centres, which we strongly oppose.

Real-time RBI: a live use of an RBI system.

Post RBI: a retrospective use of an RBI system.

Law enforcement purposes: as noted by emeritus Professor Douwe Korff, this concept is used in a worryingly broad way in the AI Act, potentially including other state and non-state actors who have been empowered to carry out a law enforcement task. Korff notes that this could include, for example, banks when carrying out money laundering checks.

To understand how the biometric prohibitions in the AI Act work, we need to understand the over-arching framework of the whole Act. Broadly speaking, the EU’s AI Act sorts uses of AI systems into six categories, and then applies a set of specific rules accordingly.

Four explicit categories:

Article 5: “Prohibited AI Practices”

Specific uses of AI systems that pose an unacceptable risk to people’s fundamental human rights or safety are banned. These prohibitions can be found in Article 5. The prohibitions relevant to biometric mass surveillance practices are:

  • The use of real-time RBI by police in publicly-accessible spaces. However, it is important to note that there are three exceptions to this ban. If police want to use one of these exceptions, the use is considered high-risk, and will additionally have to follow extra biometrics-specific rules;
  • The use of emotion recognition systems in certain contexts – namely workplaces and education institutions – except if it’s for “medical or safety reasons”, a concerning loophole;
  • The use of biometric categorisation systems to infer sensitive characteristics. Such systems are banned, but with the exclusion of gender, gender identity, health status and disability, and with an additional exception for police using these tools for “labelling or filtering”;
  • The scraping of the internet or of CCTV footage by AI systems in order to build or expand facial recognition databases – like the notorious Clearview AI software (Art. 5.1.(e)).

Article 6 and Annex III: “High-risk AI systems”

According to Article 6, AI systems that pose a high risk to people’s rights or safety have to follow a series of steps and rules, including on things like data quality, documentation and fundamental rights impact assessments (FRIAs). These rules constitute the main part of the AI Act, which the European Commission’s initial impact assessment predicted would apply to between 5% and 15% of all AI systems on the EU market. The systems that are considered high risk are determined by the “areas” listed in Annex III of the text.

However, Article 6.4. allows those developing or selling the AI system (“providers”) to self-assess whether they fall under Annex III, and there is no systematic check of their assessment. The conformity assessment for high-risk AI systems (proof of compliance with the AI Act) is also self-assessed, with the exception of biometric systems.

  • Annex III, the annex of high-risk areas, includes any remote biometric identification (real-time or post), emotion recognition or biometric categorisation systems that aren’t ruled out by one of the prohibitions. This means that “post” (retrospective) police use of remote biometric identification systems is always high risk. Any actor developing, deploying or using these systems therefore needs to follow all the rules in Chapter III of the AI Act;
  • Whilst for all other high-risk AI systems, the conformity assessment (proof of compliance with the AI Act) is self-assessed, this is not the case for “AI systems intended to be used for biometrics” (Recital 125), which must undergo third-party “testing, certification and inspection” (definition in Article 3.(21));
  • In addition, any use of real-time (live) remote biometric identification by police in publicly-accessible spaces, and which is based on one of the listed exceptions to the RBI prohibition, has to follow several additional rules compared to other high-risk systems. These extra rules for the use of real-time RBI are laid out in Article 5.2. and explained in more detail in Part 2 of this guide.

Article 50: “Certain AI systems”

A small number of systems (e.g. “synthetic … content”, also known as deepfakes) are subject to some small additional transparency requirements in Article 50. This includes things like making generated content “detectable as artificially generated or manipulated” (Art. 50.2. and 50.4.), although there are exceptions to this rule. Whilst the systems described in Article 50 have sometimes been referred to as low-risk systems, the text does not actually describe them as such. There are also specific rules for emotion recognition (ER) and biometric categorisation (BC) systems when such uses aren’t covered by the ban (e.g. biometric categorisation on the basis of gender, which is not banned). The use of these ER and BC systems also carries a specific requirement to “inform” anyone who is exposed to the use of such a system, except when the system is used for law enforcement purposes.

Articles 53 – 55: “General purpose AI [GPAI] models”

Articles 53 to 55 lay out specific rules for general purpose AI models like ChatGPT, including keeping up-to-date technical records and documentation and having policies for compliance with copyright law. Where general-purpose AI systems pose a “systemic risk”, there are enhanced rules including on testing and cybersecurity. Further guidelines on GPAI are also going to be developed by the EU AI Office.

One implicit category:

Negligible risk systems

These systems – the majority of AI systems that will be on the EU market – do not have to follow any of the AI Act’s rules (but of course must still follow other applicable rules, such as on data protection or product safety).

Providers of these systems – the majority of those in the EU – are instead encouraged to adhere to voluntary codes of conduct and other voluntary ethical rules which will be developed.

And a biometrics-specific category:

Post-remote biometric identification

All forms of retrospective RBI, regardless of who is using the system, are considered high risk, and therefore have to follow the rules mentioned above, as well as undergoing a third-party conformity assessment. What’s more, there are additional provisions listed in Article 26. for any use of RBI that happens after the fact (post) – most commonly by applying biometric identification algorithms to CCTV or other video footage. These additional rules include things like requiring authorisation, and for the identification to be “targeted”.

In order to implement the AI Act’s rules, the European Commission, supported by several newly-created fora and bodies, has to develop guidelines and other procedures for how to interpret them. These guidelines will include how Member States should interpret the prohibitions and the high-risk systems, which makes them very important.

Advocacy opportunity #1

Influence guidelines, codes of practice and other EU-level processes (NOW)


Non-exhaustively, some of the bodies and processes which will be most relevant for the fight against BMS include:

  • The European Commission: the Directorate-General for Communications Networks, Content and Technology (DG Connect) is responsible for delivering key guidelines on the implementation and interpretation of the AI Act (Article 96). NGOs already advocating at EU level can incorporate efforts to influence the drafting of these guidelines into their advocacy work, or work via national channels with influence on the EU (e.g. telecommunications ministries). In addition, as official Commission documents, all draft guidelines will be presented for public consultation, creating a formal opportunity for civil society to provide feedback;
    • Most important from a biometrics perspective will be the parts on prohibitions (Art. 96.1.(b)) and on high-risk AI systems (Article 6.5.). This will include a list of “practical examples” of systems that are considered high risk and those that are not, which will be very influential on future protections (or lack thereof);
    • Subsequently, the European Commission will also develop templates for Member States to report about their use of RBI, which could be important for us to try to influence (Article 5.4.);
  • The AI Office: the European AI Office, recently established under Article 64, will be made up predominantly of existing staff from the European Commission’s Directorate-General (DG) Connect, supplemented by recruited experts in technology and policy, and complemented with participation from other stakeholders. It will create, or support the creation of, codes of practice (Article 56), codes of conduct (Article 95) and other tools, which will be important for ensuring a strong and coherent application of the AI Act’s rules. You can express interest in staying up to date on opportunities for civil society input, and see the latest about the Office and its Advisory Forum;
  • The Scientific Panel: the AI Office and the Commission will be supported by a Scientific Panel, which represents an opportunity for technologists to ensure robust and evidence-based implementing rules for the AI Act. We also want to push for this panel to include social sciences expertise, which will be critical for a rights-protective approach to AI, and should include this as a key advocacy message in our work on the AI Act;
  • The EU AI Board / ‘the Board’: Article 65 of the Act also creates a formal European Artificial Intelligence Board, made up of one representative from each EU member state, with observer status for the European Data Protection Supervisor, and attendance at meetings by the AI Office. This Board oversees the enforcement of the AI Act. Civil society can highlight issues to the Board or call on it to react to certain developments or areas of concern at EU or national level;
  • The Advisory Forum: The AI Board and the European Commission will be supported in their work by an Advisory Forum (Article 67), including industry/commercial entities, academia, certain representatives of the EU member states, standard-making bodies, other EU agencies, and civil society. This is a formal opportunity to have a seat at the table, as well as an opportunity to advocate towards a rights-respecting implementation of the AI Act. As well as getting a formal seat through civil society, those who want to influence can also seek a seat through existing participation in standard-setting bodies, in particular CEN-CENELEC;
  • Competent authorities: each country will set up an authority, known as a ‘market surveillance authority’, to enforce the AI Act. This may be part of an existing authority or it might be completely new. These authorities will work with the country’s national human rights authority to ensure compliance with the AI Act. Along with existing data protection authorities, they will be very important for getting the most rights-protective outcomes from the AI Act, so we can already advocate nationally to make sure that the authorities appointed for this role are independent, credible and have strong fundamental rights expertise.

Part 2: coming end May 2024!

This guide was written by Ella Jakubowska, Head of Policy at EDRi

With enormous thanks and gratitude to the EDRi network and Reclaim Your Face campaign