Biometrics
Biometrics refers to the use of analytical tools and devices that capture, record and/or process people's physical or behavioural data. This can include faces (commonly processed through facial recognition), fingerprints, DNA, irises, walking style (gait), voice, as well as other characteristics. Under the EU's data protection laws, this biometric data is especially sensitive. It is linked to our individual identities, and it can also reveal protected and intimate information about who we are, where we go, our health status and more. When used to indiscriminately target people in public spaces, to predict behaviours and emotions, or in situations of imbalances of power, biometric surveillance such as facial recognition has been shown to violate a wide range of fundamental rights.
-
The #PaperBagSociety challenge
The #PaperBagSociety is a social media challenge, part of the #ReclaimYourFace campaign, that invites everyone to share online the impact of living life with a paper bag on their head. With it, we aim to raise awareness of how absurd it is to have to avoid facial recognition technologies in public spaces, and why we need to build an alternative future, free from biometric mass surveillance.
-
Workplace, public space: workers organising in the age of facial recognition
‘Surveillance capitalism’ is increasingly threatening workers’ collective action and the human right to public protest.
-
EDRi joins 178 organisations in global call to ban biometric surveillance
From protesters taking to the streets in Slovenia, to the subways of São Paulo; from so-called “smart cities” in India, to children entering French high schools; from EU border control experiments, to the racialised over-policing of people of colour in the US. In each of these examples, people around the world are increasingly and pervasively being subjected to toxic biometric surveillance. This is why EDRi has joined the global Ban Biometric Surveillance coalition, to build on our work in Europe as part of the powerful Reclaim Your Face campaign.
-
The urgent need to #reclaimyourface
The rise of automated video surveillance is often touted as a quick, easy, and efficient solution to complex societal problems. In reality, roll-outs of facial recognition and other biometric mass surveillance tools constitute a systematic invasion into people’s fundamental rights to privacy and data protection. Like with uses of toxic chemicals, these toxic uses of biometric surveillance technologies need to be banned across Europe.
-
New win against biometric mass surveillance in Germany
In November 2020, reporters at Netzpolitik.org revealed that the city of Karlsruhe wanted to establish a smart video surveillance system in the city centre. The plan involved an AI system that would analyse the behaviour of passers-by and automatically flag conspicuous behaviour. After the intervention of EDRi member CCC, the project was buried in May 2021.
-
Challenge against Clearview AI in Europe
This legal challenge relates to complaints filed with five European data protection authorities against Clearview AI, Inc. ("Clearview"), a facial recognition technology company building a gigantic database of faces.
-
From ‘trustworthy AI’ to curtailing harmful uses: EDRi’s impact on the proposed EU AI Act
Civil society has been the underdog in the European Union's (EU) negotiations on the artificial intelligence (AI) regulation. The goal of the regulation has been to create the conditions for AI to be developed and deployed across Europe, so any shift towards prioritising people's safety, dignity and rights feels like a great achievement. Whilst a lot still needs to happen to make this shift a reality in the final text, EDRi takes stock of its impact on the proposed Artificial Intelligence Act (AIA). EDRi and partners mobilised beyond the organisations that traditionally follow digital initiatives, managing to establish that some uses of AI are simply unacceptable.
-
Can a COVID-19 face mask protect you from facial recognition technology too?
Mass facial recognition risks our collective futures and shapes us into fear-driven societies of suspicion. This got folks at EDRi and Privacy International brainstorming. Could the masks that we now wear to protect each other from Coronavirus also protect our anonymity, preventing the latest mass facial recognition systems from identifying us?
-
Washed in blue: living lab Digital Perimeter in Amsterdam
An increasing number of Dutch government agencies seem to resort to so-called 'living labs' and 'field labs' in order to test and experiment with technological innovations in a realistic setting. In recent years, these live laboratories have proven to be a useful stepping stone for introducing new technologies into public space. Over the last several weeks, EDRi's member Bits of Freedom took a closer look at one of those living labs – the so-called Digital Perimeter surrounding the Johan Cruijff ArenA in Amsterdam – and was not pleased with what it saw.
-
EU Parliament adopts the Covid Pass: risks for data protection and new forms of discrimination
At first glance, the Digital Green Certificate may sound appealing, but upon further reflection it quickly becomes clear that the proposed system has the potential to divide society and expose certificate holders to far-reaching surveillance by the authorities that issue the documents. Even worse, it exacerbates inequalities and increases social exclusion, shares EDRi's member epicenter.works.
-
Initial wins in Italy just two months after the launch of Reclaim Your Face
Last week, the #ReclaimYourFace campaign reached two important milestones at the national level. On Friday, 16 April, the Italian Data Protection Authority (DPA) rejected the SARI Real Time facial recognition system acquired by the police, saying that the system lacks a legal basis and that, as designed, it would implement a form of mass surveillance.
-
EU’s new artificial intelligence law risks enabling Orwellian surveillance states
When analysing how AI systems might impact people of colour, migrants and other marginalised groups, context matters. Whilst AI developers may be able to predict and prevent some negative biases, for the most part, such systems will inevitably exacerbate injustice. This is because AI systems are deployed in a wider context of systematic discrimination and violence, particularly in the field of policing and migration.