-
Do no harm? How the case of Afghanistan sheds light on the dark practice of biometric intervention
In August 2021, as US military forces exited Afghanistan, the Taliban seized facial recognition systems, highlighting just how a failure to protect people’s privacy can tangibly threaten their physical safety and human rights. Far from being good tools which fell into the wrong hands, the very existence of these systems is part of broader structures of data extraction and exploitation spanning continents and centuries, with a history wrapped up in imperialism, colonialism and control.
Read more
-
AI Regulation: The EU should not give in to the surveillance industry
Although it claims to protect our liberties, the European Commission's recent legislative proposal on artificial intelligence (AI) promotes the accelerated development of all aspects of AI, in particular for security purposes.
Read more
-
No biometric surveillance for Italian students during exams
In September 2021, the Italian Data Protection Authority (DPA) fined Luigi Bocconi University €200 000 for using Respondus, a proctoring software, without sufficiently informing students of the processing of their personal data and, among other violations, for processing their biometric data without a legal basis. Bocconi, a private university based in Milan, introduced Respondus tools during the COVID-19 pandemic to monitor students during remote exams.
Read more
-
EDRi urges Portugal government to oppose proposed video surveillance law
EDRi member and Reclaim Your Face lead organisation D3 (Defesa Dos Direitos Digitais) is raising awareness of how the Portuguese government's newly proposed video surveillance and facial recognition law – which ministers are trying to rush through Parliament – amounts to illiberal biometric mass surveillance. It also endangers the very foundations of democracy on which the Republic of Portugal rests.
Read more
-
Artificial intelligence – a tool of austerity
This week, Human Rights Watch published a much-needed comment on the EU's Artificial Intelligence Regulation. As governments increasingly resort to AI systems to administer social security and public services more broadly, there is an ever-greater need to analyse the impact on fundamental rights and the broader public interest.
Read more
-
Facebook deleting facial recognition: Five reasons to take it with a pinch of salt
Voluntary self-regulation from tech giants is superficial and no replacement for actual legislation
Read more
-
The EU Parliament Took a Stance Against AI Mass Surveillance: What are the Global Implications?
The European Parliament's resolution on artificial intelligence in criminal law and its use by the police presents an opportunity for the EU to reconsider its role in the development of such tools, their sale, or use as part of its counter-terrorism and anti-immigration policies abroad.
Read more
-
Celebrating a strong European Parliament stance on AI in law enforcement
On 5 October, following a significant push from across civil society, the European Parliament voted to adopt an important new report on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters, by a promising majority of 377 votes in favour to 248 against. This followed a series of tense earlier votes in which a majority of MEPs rejected all four attempts by the European People's Party (EPP) to remove key fundamental rights provisions from the report.
Read more
-
EDRi and 41 human rights organisations call on the European Parliament to reject amendments to AI and criminal law report
EDRi and 41 human rights organisations* call on the members of the European Parliament to vote against the new amendments, which enable discriminatory predictive policing and biometric mass surveillance.
Read more
-
Building a coalition for Digital Dignity
In 2020, EDRi started to build the ‘Digital Dignity Coalition’, a group of organisations and activists active at the EU level dedicated to upholding rights in digital spaces and resisting harmful uses of technology. We’ve been organising to understand and resist how technological practices differentiate, target and experiment on communities at the margins – this article sets out what we’ve done so far.
Read more
-
Digital Dignity Document Pool
Digital technologies can have a profound effect on our societies, but sufficient attention is rarely given to how certain applications differentiate between, target and experiment on communities at the margins. This document pool gathers resources for those interested in learning about and contesting the harms to dignity and equality that arise from uses of technology and data.
Read more
-
If AI is the problem, is debiasing the solution?
The development and deployment of artificial intelligence (AI) in all areas of public life have raised many concerns about the harmful consequences for society, in particular the impact on marginalised communities. EDRi's latest report "Beyond Debiasing: Regulating AI and its Inequalities", authored by Agathe Balayn and Dr. Seda Gürses,* argues that policymakers must tackle the root causes of the power imbalances caused by the pervasive use of AI systems. In promoting technical ‘debiasing’ as the main solution to AI-driven structural inequality, we risk vastly underestimating the scale of the social, economic and political problems AI systems can inflict.
Read more