AI Act: What happens when lawmakers’ faces get scanned with face recognition algorithms?
EDRi member in Italy Hermes Center simulates face recognition on lawmakers to pressure them for a total ban of remote biometric identification (RBI) in the Artificial Intelligence (AI) Act.
Years after joining the Reclaim Your Face movement, the Milan-based Hermes Center for Transparency and Digital Human Rights, along with fellow nonprofits The Good Lobby Italy and info.nodes, has launched a campaign with a similar goal, called “Don’t Spy EU”.
On the campaign’s website, they invite you to scan EU lawmakers’ faces with a face recognition algorithm. By becoming targets of RBI themselves, will these politicians fully understand the risks of this harmful and often inaccurate technology? Read on to find out.
The Don’t Spy EU campaign paints a dystopian yet realistic scenario. Following the legalisation of RBI systems in public spaces, people living in Europe will struggle with government surveillance and the likelihood of getting mistaken for somebody else (yes, the website also features a deepfakes section).
Biometric surveillance would mean the total loss of personal freedom, a breach of the right to privacy and amplified discrimination based on machine biases. It is crucial that people’s biometric data, such as faceprints collected by CCTV cameras in a public square, remain private. The solution is to ban RBI in public spaces. This will allow you to live without constantly looking over your shoulder, or feeling that you have to conform to someone else’s labels.
However, it seems like a handful of policymakers need some more convincing that your privacy matters. Join the movement of digital rights experts and privacy supporters in calling on lawmakers to vote for a full ban on real-time and post-RBI in public spaces in the European Union’s AI Act. It’s the final month of negotiations, and we need to act now.
Here is how you can stop RBI
By using technology already available on the market, Hermes Center has created dontspy.eu (#DontSpyEU), an online space where you can simulate RBI or deepfakes on politicians’ photos. You can also upload new images and help expand the database. So how does it work, you wonder? Read on.
The interactive map in the “Faces” section of the website allows you to select a country and view that country’s politicians as “scanned” by the RBI software. Don’t see your favourite (or least favourite) politician on the map? Go to the bottom of the page and fill out the form!
You can upload photos of up to 5 politicians per country. When politicians’ faces are scanned, a list of personal information is displayed: first name, last name, country, and position/role. You can also see an AI-determined age, gender and emotional state. This will give you an idea of how flawed and inaccurate face recognition algorithms can be.
You cannot miss the deepfakes included in the list of pictures available for RBI scanning. Did you know that the RBI software is virtually incapable of distinguishing between real photos of individuals and deepfakes in which their faces appear? Your political representatives should learn that too!
Join the movement and spread the website content widely. Make sure to use the hashtag #DontSpyEU when posting online to link all efforts of convincing lawmakers. You have a wide range of resources to draw on – photos, page screenshots, readings, and links.
As the AI Act’s negotiations resume at the European Parliament, dozens of organisations are mobilising across Europe to put pressure on legislators.
There is overwhelming evidence of RBI constituting a threat to people’s freedom and right to privacy, especially when employed by governments in public spaces. However, some lawmakers in the European Union need more convincing. At this point, they are willing to accept only a ban on real-time biometric recognition and emotional recognition.
In other words, they want to exclude restrictions and bans on post-RBI and maintain exceptions for authorities and law enforcement. No ban on post-RBI means that biometric data can still be analysed at any point after the fact using previously captured inputs, most frequently CCTV footage.
Imagine that the government can use your highly sensitive personal data to see where you went, what you did, and with whom you met over the course of weeks, months or even years. Chilling, right?
Furthermore, there is no mention of safeguards for RBI systems used in migration contexts (at the EU border). On the contrary, in the compromise proposal discussed recently, leaked by Politico, we see that lawmakers would allow providers to test high-risk systems on people in real-life contexts (and not virtually simulated, such as in sandboxes) for up to 12 months, after notifying the market surveillance authority.
Need more information on what’s wrong with RBI?
It is the lawmakers’ job to build a law, the AI Act, that guards people’s digital rights. However, introducing protection only against the risks linked to the misuse and exploitation of RBI systems is simply not enough. But why should we be protected in the first place?
The amount of sensitive personal data that governments and private companies could access, should RBI ever become legal in public spaces across the EU, is enormous. Biometric data is information unique to every individual: think of fingerprints, voice, gait, gestures… and of course, the face.
When AI-powered RBI systems are run on CCTV camera footage, they can connect someone’s face to their identity as registered in a database – already “fed into” the system. Now, governments and public institutions are often the ones providing such databases (containing IDs, passports, driving licenses, etc.). By linking face and ID through RBI, they would have easy access to everyone’s identity, and use this piece of information as they please.
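The matching step described above can be illustrated in a few lines of code. This is a minimal, hypothetical sketch, not the Don’t Spy EU tool or any real RBI product: actual systems use neural networks to turn each face image into a numerical “embedding” vector, then compare that vector against a database of known identities. Here, the embeddings and names are made up, so only the comparison logic is shown.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical identity database, e.g. embeddings computed from ID photos.
database = {
    "Person A": [0.9, 0.1, 0.3],
    "Person B": [0.2, 0.8, 0.5],
}

def identify(probe, threshold=0.9):
    """Return the best-matching identity, or None if no match clears the threshold."""
    best_name, best_score = None, -1.0
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# A probe embedding (e.g. extracted from a CCTV frame) close to Person A's:
match, score = identify([0.88, 0.12, 0.28])
```

Note that the outcome hinges entirely on the threshold: set it too low and strangers get “identified” as someone in the database, which is exactly the kind of misidentification the campaign warns about.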
The creation of deepfakes also relies on the same foundation models as RBI, as they use the same biometric data. Even though the use of some deepfakes may appear funny or entertaining (think of that Keanu Reeves deepfake dancing in his living room), these models constitute potential harm as they pave the way to disinformation, misrepresentation and online harassment.
A ban on biometric mass surveillance is the only solution for a future where our choices are made by us, not by algorithms
We do not know how the RBI debate will end. But we know that the privacy of individuals is something the public and private sectors have been trying to get their hands on for many years.
The AI Act represents a chance to prevent our society’s failure due to dysfunctional AI development. Regulations can help us shape the future, but only when their decision-making processes are grounded in the culture, expertise and consultation of an informed civil society.
The Don’t Spy EU movement’s message transcends the completion of the AI Act legal cycle. Hermes Center will continue to monitor the development of new technologies and offer support to those negatively impacted by RBI systems, ensuring they get the protection they deserve.
If you enjoy our initiative, don’t forget to spread the word:
- Browse the dontspy.eu website & try out the interactive map
- Provide Hermes Center with some feedback, by contacting
- Share the campaign with your friends to make sure it reaches as many people as possible. On social media, you can tag Hermes Center on LinkedIn (Hermes Center), X (@hermescenter) and Facebook (Hermes Center). Don’t forget to use the #DontSpyEU hashtag. You can also add the following: #HermesCenter, #RBI, #FaceRecognition and #AIAct.
*Don’t Spy EU is the follow-up project to a first campaign we launched in May 2023, Don’t Spy On Us EU, which also featured similar tools.
Contribution by: Alessandra Bormioli and Claudio Agosti, EDRi member, Hermes Center for Transparency and Digital Human Rights