Will MEPs ban Biometric Mass Surveillance in key EU AI Act vote?

The EDRi network and partners have advocated for the EU to ban biometric mass surveillance for over three years through the Reclaim Your Face campaign. On May 11, their call may turn into reality as Members of the European Parliament’s internal markets (IMCO) and civil liberties (LIBE) Committees vote on the AI Act.

By Reclaim Your Face (guest author) · May 9, 2023

The EDRi network, along with our wide range of partners in the Reclaim Your Face campaign, have advocated for over three years for the EU to ban biometric mass surveillance practices – and now, our call is one step closer to becoming a reality.

Warning of the severe chilling effects on free expression, the threats of discrimination, and the wholesale violation of our democratic right to privacy, tens of thousands of people in the EU have joined us to reject these mass surveillance systems that treat us all as walking barcodes.

Thursday 11th May 2023 promises to be a historic moment, as Members of the European Parliament’s internal markets (IMCO) and civil liberties (LIBE) Committees vote on whether they are willing to put human rights and freedoms first – or if they will prioritise surveillance, control and profits. In line with an October 2021 vote where MEPs across the political spectrum promised to ban biometric mass surveillance, we are counting on them to do the right thing once again.

Banning biometric mass surveillance: what does it look like?

As highlighted by the UN’s human rights chief, the use of biometric technologies poses a serious threat to global human rights. In the context of the AI Act, there are several specific practices that together constitute biometric mass surveillance, and they must be stopped. We therefore call on human rights champions in the LIBE and IMCO committees to ensure the following amendments to the AI Act are approved:

1) The AI Act must prohibit all forms (real-time and post) of remote biometric identification (RBI) in publicly accessible spaces [vote + on CA 11A]

  • The use of these systems in our public spaces, for example facial recognition surveillance in public streets, puts us all in a virtual line-up and treats us as potential suspects without cause;
  • These practices chill our free expression, and make it harder to access almost all our other human rights: to protest, to access healthcare, to enjoy religious freedoms, to live free from discrimination and more;
  • It doesn’t matter whether they happen live (in ‘real-time’) or retrospectively (‘post’). The chilling effect, and other negative consequences on human rights, are just as severe.

2) The AI Act must prohibit biometric categorisation on the basis of sensitive characteristics, emotion recognition, and the mass scraping of CCTV feeds or social media for compiling mass biometric databases [vote + on CA 11]

  • These practices use our faces and bodies against us, predicting and profiling our most sensitive characteristics: our gender, ethnicity, age, even whether we appear to be happy, angry or suspicious;
  • Not only is this deeply invasive and disproportionate, but it is based on flimsy pseudo-science that just does not hold up to scrutiny. It is vital that the EU does not legitimise such dystopian practices!
  • The mass scraping of our faces from the internet or from CCTV footage by companies like Clearview AI has already been stopped by several EU data protection authorities. The AI Act is a key opportunity to put a stop to it, for all the EU’s residents, once and for all.

3) The AI Act must make all other forms of biometric identification high risk (Annex III) [vote + on CA 10]

  • The process of biometric identification always poses a high risk to people’s rights and freedoms. The GDPR already prohibits it in principle – so exactly the same should apply when AI systems are involved;
  • However, claims about security or safety are often used as a veil for deploying these invasive technologies against people on the move without proper justification. It is therefore important that these technologies – which have been linked to violations of asylum seekers’ rights, as well as of workers’ and children’s data protection rights through their use in workplaces and schools – are strictly controlled.

In addition to the above steps to ban biometric mass surveillance practices, we urge the Parliament to prohibit automated behavioural detection in the AI Act to ensure that the prohibition is comprehensive. As seen in France, the automated detection of people’s behaviour in public spaces, for example to profile crowd movements or flag suspicious objects, can be just as harmful as identifying or profiling people based on their protected characteristics.

Ella Jakubowska

Senior Policy Officer

Twitter: @ellajakubowska1