EU privacy regulators and Parliament demand AI and biometrics red lines

In their Joint Opinion on the AI Act, the EDPS and EDPB “call for [a] ban on [the] use of AI for automated recognition of human features in publicly accessible spaces, and some other uses of AI that can lead to unfair discrimination”. Taking the strongest stance yet, the Joint Opinion explains that “intrusive forms of AI – especially those who may affect human dignity – are to be seen as prohibited” on fundamental rights grounds.

By EDRi · July 14, 2021

The month of June brought exciting leaps forward for the digital rights community, as a series of important actors called for stronger red lines on harmful uses of artificial intelligence (AI). For over a year, EDRi has been advocating for such limitations to guarantee respect for fundamental rights in the development and use of AI technologies. Our demands are becoming increasingly prominent and were a key feature of the Artificial Intelligence Act (AI Act), proposed in April 2021. However, it wasn’t all good news: the AI Act’s prohibitions are vague and full of loopholes, and must be strengthened if this new law is to fully protect people’s rights. In EDRi’s first analysis, we argued that it needs major changes to prevent discrimination and mass surveillance.

We weren’t the only ones disappointed. Three important European Union (EU) institutions – the European Parliament, the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB) – issued official reports echoing EDRi’s main critiques. Coupled with the push for red lines earlier this year from 116 Members of the European Parliament, as well as demands from EDRi alongside 61 digital and human rights organisations, these developments create a compelling argument for strong AI red lines across Europe.

What does it mean to have AI red lines?

Red lines are strict legal limits on uses of technologies that violate fundamental rights. If we want a just and democratic society in which technology truly works for all, there are some practices that are simply unacceptable. This includes deploying AI to experiment on migrants and asylum seekers; ‘predicting’ acts of criminality before they have happened; scoring people to control their access to welfare; or surveilling populations through biometric mass surveillance practices. These use cases employ vast amounts of (often highly sensitive) data to make judgments, predictions or decisions about people, often with far-reaching consequences for their daily lives. In particular, these uses of AI are likely to exacerbate structural oppression and inequality, as well as lending a false objectivity to discriminatory trends and making them harder to contest. The call for red lines seeks to counteract this, pre-empting and preventing AI-based harms to people and communities.

EDPS and EDPB Joint Opinion criticises AI Act for weak red lines

Together, the EDPS and the EDPB are the most senior and influential data protection regulators in the EU. They are responsible for keeping EU institutions (such as the Commission), Member State governments/authorities (including law enforcement agencies) and companies in check when it comes to respecting and protecting people’s rights to data protection and privacy, as well as other fundamental rights in the context of data processing.

In their Joint Opinion on the AI Act, the EDPS and EDPB “call for [a] ban on [the] use of AI for automated recognition of human features in publicly accessible spaces, and some other uses of AI that can lead to unfair discrimination”. Taking the strongest stance yet, the Joint Opinion explains that “intrusive forms of AI – especially those who may affect human dignity – are to be seen as prohibited” on fundamental rights grounds. Specifically, the Joint Opinion argues that the most harmful AI use cases – many of which the Commission only defined as “high risk”, meaning that they are subject merely to a series of disappointingly weak safeguards – should instead be banned. This includes all forms of social scoring, as well as AI systems used to assess the risk of offending or reoffending.

When it comes to remote biometric identification in publicly accessible spaces – which we call “biometric mass surveillance” – the EDPS and EDPB were critical of the AI Act. Just as EDRi has argued, the Joint Opinion sets out that such practices pose such an extreme and irreversible threat that any type of “automated recognition of human features” should be banned in public spaces – crucially, including online ones.

By contrast, the Commission’s AI Act puts so many caveats in the so-called ‘prohibition’ on police uses of remote biometric identification that it can scarcely be called a prohibition. The EDPS and EDPB call these exceptions “flawed”, outlining that all actors (both public and private) should instead be subject to a “general” ban on these practices. This must also include prohibiting the biometric categorisation of sensitive characteristics like gender, sexuality and ethnicity, as well as so-called “emotion recognition” applications (neither of which the AI Act currently even specifies as high risk).

Further, this Joint Opinion endorses EDRi’s argument that – whilst existing European data protection laws do indeed provide a strong basis for protecting people from abuses of their biometric data – there is an urgent need for additional, complementary laws to close existing loopholes. From their position on the EU’s front lines, the EDPB and EDPS are well placed to recommend that the AI Act create additional rules to fully protect people from harmful uses of AI.

The LIBE report on Artificial intelligence in criminal law

Within the European Parliament, the Civil Liberties, Justice and Home Affairs Committee (LIBE) recently passed a new report on “Artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters”. Whilst this is an ‘own initiative report’ (meaning that it is not a new law), it gives a strong political signal about the Parliament’s position on biometrics and AI, and could influence the debate on the AI Act.

The direction of political travel is promising. Increasingly, politicians recognise that setting regulatory restrictions is crucial to preserving democracy, the rule of law and avoiding a techno-solutionist free-for-all.

In this new report, the LIBE Committee takes a strong stand against AI technologies leading to unnecessary and disproportionate mass surveillance. They suggest instead that AI technologies should be subject to strict democratic oversight and controls – and state their firm opposition to predictive policing, whereby law enforcement agencies attempt to predict individuals’ future criminality. This is an important step, as predictive policing practices by definition embed structural and historical discrimination, and violate the presumption of innocence and due process rights.

On the question of biometric mass surveillance, the Committee suggest a series of red lines for law enforcement, proposing: a permanent prohibition on the automated analysis of non-facial human features (such as gait or voice); a ban on the use of private facial recognition databases like those sold by the notorious Clearview AI; and a moratorium (time-limited ban) on the use of facial recognition for identification. While EDRi continues to push for a permanent ban on all such practices in the AI Act, it is notable that the LIBE Committee allows for the possibility that the moratorium could become a full ban, should law enforcement be unable to prove that such practices can be carried out in compliance with fundamental rights. EDRi’s analyses have demonstrated that this will never be the case.

Next steps for AI regulation

Whilst these developments represent a positive step forward, people’s digital rights in the EU are far from being out of the woods. Given growing trends such as the global digitalisation of welfare and the rise of so-called “smart cities” (read: surveillance cities), we know that many governments are hesitant to put people’s rights ahead of opportunities to control or profit from populations.

Over the next few years, the European Parliament and the Council of the European Union (the group which represents the twenty-seven Member States) will be negotiating the contents of the AI Act proposal before the proposal becomes law.

What can you do?

The European Parliament will put forward its proposed amendments to the Regulation starting in the autumn, led for now by the IMCO committee. To follow the legislative developments and EDRi’s material on this file, have a look at EDRi’s document pool on artificial intelligence.

Our campaign, Reclaim Your Face, is pushing the EU institutions to implement the strongest protections for our fundamental rights by banning biometric mass surveillance practices. Your voice can help us argue for this and other AI red lines – sign our official European Citizens’ Initiative.

Image credit: Sona Manukyan/Flickr (CC BY-NC 2.0)

Contribution by:

Ella Jakubowska

Policy Advisor

Twitter: @ellajakubowska1


Sarah Chander

Senior Policy Advisor

Twitter: @sarahchander

Sign the European Citizens’ Initiative (ECI)

If you’re an EU citizen, you can help us change EU laws by signing the official #ReclaimYourFace initiative to ban biometric mass surveillance practices.

This is not a regular petition, but an official “European Citizens’ Initiative” (ECI) run by EDRi on behalf of the European Commission. This means your signature must be officially verified by national authorities, according to each EU country’s specific rules. We cannot control the data that they require: it is mandated by Regulation (EU) 2019/788 on the European citizens’ initiative for the purpose of confirming your signature. We can only use the information that you provide in Step 2 to contact you with updates, and only if you choose to enter it. Furthermore, our ECI signature collection system has been verified by the German Federal Office for Information Security (BSI) to ensure it is compliant with the EU’s Regulation on ECIs. Please see our “Why ECI?” page for further details, and check out our privacy policy.

This ECI is open to all EU citizens, even if you currently live outside the EU (although there are special rules for Germany). Unfortunately, if you are not an EU national, the EU’s official rules say that you cannot sign. Check out the other ways that non-EU citizens can help the cause.

Note to German citizens: it is possible to sign our ECI petition if you live outside the EU, but German rules mean that, for German citizens specifically, your signature will only be valid if your current permanent residence is registered with the relevant German diplomatic representation. If you are not registered, then unfortunately your signature will not be counted. You can read more information about the rules. This rule does not apply to citizens of any other EU country.

Legally, if we reach 1 million signatures (with minimum thresholds met in at least 7 EU countries) then the European Commission must meet with us to discuss our proposal for a new law. They must then issue a formal communication (a piece of EU soft law) explaining why they are or are not acting on our proposal, and they may also ask the European Parliament to open a debate on the topic. For these reasons, a European Citizens’ Initiative (ECI) is a powerful tool for getting our topic onto the EU agenda and showing wide public support for banning biometric mass surveillance practices.