Artificial intelligence (AI)
Artificial intelligence (AI) refers to a broad range of processes and technologies that enable computers to complement or replace tasks otherwise performed by humans. Such systems can exacerbate surveillance and intrusion into our personal lives, reflect and reinforce some of the deepest societal inequalities, fundamentally alter the delivery of public and essential services, undermine vital data protection legislation, and disrupt the democratic process itself. In the face of this, EDRi strives to uphold our fundamental rights, democracy, equality and justice in all legislation, policy and practice related to artificial intelligence.
-
New win against biometric mass surveillance in Germany
In November 2020, reporters at Netzpolitik.org revealed that the city of Karlsruhe wanted to establish a smart video surveillance system in the city centre. The plan involved an AI system that would analyse the behaviour of passers-by and automatically flag conspicuous behaviour. After the intervention of EDRi member CCC, the project was abandoned in May 2021.
Read more
-
Challenge against Clearview AI in Europe
This legal challenge relates to complaints filed with five European data protection authorities against Clearview AI, Inc. ("Clearview"), a facial recognition technology company building a gigantic database of faces.
Read more
-
From ‘trustworthy AI’ to curtailing harmful uses: EDRi’s impact on the proposed EU AI Act
Civil society has been the underdog in the European Union's (EU) negotiations on the artificial intelligence (AI) regulation. The goal of the regulation has been to create the conditions for AI to be developed and deployed across Europe, so any shift towards prioritising people's safety, dignity and rights feels like a great achievement. Whilst a lot still needs to happen to make this shift a reality in the final text, EDRi takes stock of its impact on the proposed Artificial Intelligence Act (AIA). EDRi and partners mobilised beyond the organisations traditionally following digital initiatives, managing to establish that some uses of AI are simply unacceptable.
Read more
-
Can a COVID-19 face mask protect you from facial recognition technology too?
Mass facial recognition risks our collective futures and shapes us into fear-driven societies of suspicion. This got folks at EDRi and Privacy International brainstorming. Could the masks that we now wear to protect each other from Coronavirus also protect our anonymity, preventing the latest mass facial recognition systems from identifying us?
Read more
-
Initial wins in Italy just two months after the launch of Reclaim Your Face
Last week, the #ReclaimYourFace campaign reached two important milestones at the national level. On Friday 16 April, the Italian Data Protection Authority (DPA) rejected the SARI Real Time facial recognition system acquired by the police, saying that the system lacks a legal basis and, as designed, would implement a form of mass surveillance.
Read more
-
EU’s new artificial intelligence law risks enabling Orwellian surveillance states
When analysing how AI systems might impact people of colour, migrants and other marginalised groups, context matters. Whilst AI developers may be able to predict and prevent some negative biases, for the most part, such systems will inevitably exacerbate injustice. This is because AI systems are deployed in a wider context of systematic discrimination and violence, particularly in the field of policing and migration.
Read more
-
EU’s AI law needs major changes to prevent discrimination and mass surveillance
The European Commission has just launched its proposed regulation on artificial intelligence (AI). As governments and companies continue to use AI in ways that lead to discrimination and surveillance, the proposed law must go much further to protect people and their rights. Here's a deeper analysis from the EDRi network, including some initial recommendations for change.
Read more
-
New AI law proposal calls out harms of biometric mass surveillance, but does not resolve them
On 21 April 2021, the European Commission put forward a proposal for a new law on artificial intelligence. With it, the Commission acknowledged some of the numerous threats biometric mass surveillance poses for our freedoms and dignity. However, despite its seemingly good intentions, the proposed law falls seriously short on our demands and does not in fact impose a ban on most cases of biometric mass surveillance – as urged by EDRi and the Reclaim Your Face coalition.
Read more
-
Why the EU needs to be wary that AI will increase racial profiling
Central to predictive policing systems is the notion that risk and crime can be objectively and accurately forecast. Not only is this presumption flawed, it demonstrates a growing commitment to the idea that data can and should be used to quantify, track and predict human behaviour. The increased use of such systems is part of a growing ideology that social issues can be solved by allocating more power, resources, and now technologies, to the police.
Read more
-
Regulating Border Tech Experiments in a Hostile World
We are facing a growing panopticon of technology that limits people’s movements, their ability to reunite with their families, and at the worst of times, their ability to stay alive. Power and knowledge monopolies are allowed to exist because there is no unified global regulatory regime governing the use of new technologies, creating laboratories for high-risk experiments with profound impacts on people’s lives.
Read more
-
Computers are binary, people are not: how AI systems undermine LGBTQ identity
Companies and governments are already using AI systems to make decisions that lead to discrimination. When police or government officials rely on them to determine who they should watch, interrogate, or arrest — or even "predict" who will violate the law in the future — there are serious and sometimes fatal consequences. EDRi's member Access Now explains how AI can automate LGBTQ oppression.
Read more
-
EU’s AI proposal must go further to prevent surveillance and discrimination
The European Commission has just launched the EU's draft regulation on artificial intelligence (AI). AI systems are being used increasingly in all areas of life – to monitor us at protests, to identify us for access to health and public services, and to make predictions about our behaviour or how much 'risk' we pose. Without clear safeguards, these systems could deepen the power imbalance between those who develop and use AI and those who are subject to it.
Read more