Biometrics
Biometrics refers to the use of analytical tools and devices that capture, record and/or process people’s physical or behavioural data. This can include faces (commonly known as facial recognition), fingerprints, DNA, irises, walking style (gait) and voice, as well as other characteristics. Under the EU’s data protection laws, this biometric data is especially sensitive. It is linked to our individual identities, and it can also reveal protected and intimate information about who we are, where we go, our health status and more. When used to indiscriminately target people in public spaces, to predict behaviours and emotions, or in situations of power imbalance, biometric surveillance such as facial recognition has been shown to violate a wide range of fundamental rights.
- Warnings against arbitrariness and mass surveillance in EURODAC
Four Members of the European Parliament in charge of leading the negotiations sent additional questions on data protection related to the EURODAC proposal to the EDPS. This follows concerns about fundamental rights violations raised in an open letter by several organisations protecting the rights of people on the move, children and digital rights, including EDRi. Read a summary of the EDPS' answers.
- Mid-point EDRi strategy review: impact and adjustments in a changing field
In April 2020, during the early months of the Covid-19 pandemic in Europe, EDRi adopted its first network multi-annual strategy for the years 2020-2024. At the mid-term of the strategy implementation, what have we learned?
- Football fans are being targeted by biometric mass surveillance
Apart from its undemocratic nature, there are many reasons why biometric mass surveillance is problematic for human rights and football fans’ rights.
- European Parliament calls loud and clear for a ban on biometric mass surveillance in AI Act
After our timely advocacy actions with over 70 organisations, the amendments to the IMCO-LIBE Committee Report for the Artificial Intelligence Act clearly state the need for a ban on Remote Biometric Identification. In fact, 24 individual MEPs, representing 158 MEPs, demand a complete ban on biometric mass surveillance practices. Now we need to keep up the pressure at European and national levels to ensure that when the AI Act is officially passed, likely in 2023 or 2024, it bans biometric mass surveillance.
- New EU law amplifies risks of state over-reach and mass surveillance
The EDRi network published its position paper on the proposed Regulation on automated data exchange for police cooperation (“Prüm II”). The European Commission’s Prüm II proposal fails to put in place vital safeguards designed to protect all of us from state overreach and authoritarian mass surveillance practices. In the worst-case scenario, we may no longer be able to walk freely on our streets, as the new law would treat large parts of the population as criminals until proven otherwise.
- Policing: Council of the European Union close to approving position on extended biometric data-sharing network
The Council of the European Union is close to reaching an agreement on its negotiating position on the 'Prüm II' Regulation, which would extend an existing police biometric data-sharing network to include facial images and offer the possibility for national authorities to open up their databases of "police records" for searches by other member states.
- The AI Act: EU’s chance to regulate harmful border technologies
The AI Act will be the first regional mechanism of its kind in the world, but it needs a serious update to meaningfully address the proliferation of harmful technologies tested and deployed at Europe’s borders.
- Will the European Parliament stand up for our rights by prohibiting biometric mass surveillance in the AI Act?
On 10 May, EDRi and 52 organisations wrote to the Members of the European Parliament asking them to ban the remote use of biometric identification technologies in publicly accessible spaces. This would protect the places where we exercise our rights and come together as communities from becoming sites of mass surveillance where we are all treated as suspects.
- Regulating Migration Tech: How the EU’s AI Act can better protect people on the move
As the European Union amends the Artificial Intelligence Act (AI Act), exploring the impact of AI systems on marginalised communities is vital. AI systems are increasingly developed, tested and deployed to judge and control migrants and people on the move in harmful ways. How can the AI Act prevent this?
- Civil society reacts to European Parliament AI Act draft Report
This joint statement evaluates the extent to which the IMCO-LIBE draft Report on the EU’s Artificial Intelligence (AI) Act, released on 20 April 2022, addresses civil society's recommendations. We call on Members of the European Parliament to support amendments that centre people affected by AI systems, prevent harm in the use of AI systems, and offer comprehensive protection for fundamental rights in the AI Act.
- The EU’s Artificial Intelligence Act: Civil society amendments
Artificial Intelligence (AI) systems are increasingly used in all areas of public life. It is vital that the AI Act addresses the structural, societal, political and economic impacts of the use of AI, is future-proof, and prioritises affected people, the protection of fundamental rights and democratic values. The following issue papers detail civil society's proposed amendments, building on the Civil Society Statement on the AI Act released in November 2021.
- The European Parliament must go further to empower people in the AI Act
Today, 21 April, POLITICO Europe published a leak of the much-anticipated draft report on the Artificial Intelligence (AI) Act proposal. The draft report has taken important steps towards a more people-focused approach, but it has failed to introduce crucial red lines and safeguards on the uses of AI, including ‘place-based’ predictive policing systems, remote biometric identification, emotion recognition, discriminatory or manipulative biometric categorisation, and uses of AI undermining the right to asylum.