EU’s AI law needs major changes to prevent discrimination and mass surveillance
The European Commission has just launched its proposed regulation on artificial intelligence (AI). As governments and companies continue to use AI in ways that lead to discrimination and surveillance, the proposed law must go much further to protect people and their rights. Here’s a deeper analysis from the EDRi network, including some initial recommendations for change.
-
New AI law proposal calls out harms of biometric mass surveillance, but does not resolve them
On 21 April 2021, the European Commission put forward a proposal for a new law on artificial intelligence. With it, the Commission acknowledged some of the numerous threats biometric mass surveillance poses for our freedoms and dignity. However, despite its seemingly good intentions, the proposed law falls seriously short on our demands and does not in fact impose a ban on most cases of biometric mass surveillance – as urged by EDRi and the Reclaim Your Face coalition.
-
Why the EU needs to be wary that AI will increase racial profiling
Central to predictive policing systems is the notion that risk and crime can be objectively and accurately forecasted. Not only is this presumption flawed; it also demonstrates a growing commitment to the idea that data can and should be used to quantify, track and predict human behaviour. The increased use of such systems is part of a growing ideology that social issues can be solved by allocating more power, resources and, now, technologies to the police.
-
Regulating Border Tech Experiments in a Hostile World
We are facing a growing panopticon of technology that limits people’s movements, their ability to reunite with their families, and at the worst of times, their ability to stay alive. Power and knowledge monopolies are allowed to exist because there is no unified global regulatory regime governing the use of new technologies, creating laboratories for high-risk experiments with profound impacts on people’s lives.
-
Computers are binary, people are not: how AI systems undermine LGBTQ identity
Companies and governments are already using AI systems to make decisions that lead to discrimination. When police or government officials rely on them to determine who they should watch, interrogate, or arrest — or even “predict” who will violate the law in the future — there are serious and sometimes fatal consequences. EDRi's member Access Now explains how AI can automate LGBTQ oppression.
-
EU’s AI proposal must go further to prevent surveillance and discrimination
The European Commission has just launched the EU draft regulation on artificial intelligence (AI). AI systems are being increasingly used in all areas of life – to monitor us at protests, to identify us for access to health and public services, to make predictions about our behaviour or how much ‘risk’ we pose. Without clear safeguards, these systems could further the power imbalance between those who develop and use AI and those who are subject to them.
-
Civil society calls for stronger protections for fundamental rights in Artificial Intelligence law
In light of the recently leaked draft of the Regulation on a European Approach for Artificial Intelligence from January 2021, EDRi and 14 of our members signed an open letter to the President of the European Commission, Ursula von der Leyen, to underline the importance of ensuring the necessary protections for fundamental rights in the new regulation.
-
Artificial Intelligence and Fundamental Rights: Document Pool
Find in this document pool all EDRi analyses and documents related to Artificial Intelligence (AI) and fundamental rights.
-
Evidence shows a European future that is dystopian: #ReclaimYourFace now to protect your city
The latest evidence shows that biometric mass surveillance is rapidly being developed and deployed in Europe without a proper legal basis or respect for our agency as self-determined and autonomous individuals. No one is safe, as our most sensitive data like our faces, eyes, skin, palm veins, and fingerprints are being tracked, traced and analysed on social media, in the park, on the bus, or at work.
-
European Commission must ban biometric mass surveillance practices, say 56 civil society groups
On 1 April, a coalition of 56 human rights, digital rights and social justice organisations sent a letter to the European Commissioner for Justice, Didier Reynders, ahead of the long-awaited proposal for new EU laws on artificial intelligence. The coalition is calling on the Commissioner to prohibit uses of biometrics that enable mass surveillance, as well as other dangerous and harmful uses of AI.
-
No faces left to hack: #ReclaimYourFace Now!
We cannot let power-hungry and profit-oriented technologies manipulate our future, take away our dignity and treat us like walking, breathing barcodes. We have the right to exercise our autonomy and self-determination free from abusive practices undermining our agency. The Reclaim Your Face European Citizens' Initiative (ECI) empowers Europeans to shape the public debate on the use of these AI-powered biometric technologies. The EU has the chance to show that people are at the centre of its values by taking the lead to ban biometric mass surveillance that endangers our freedoms, democracies and futures.
-
The EU should regulate AI on the basis of rights, not risks
EDRi's member Access Now explains why the upcoming legislative proposal on AI should be a rights-based law, like the GDPR. The European Commission must not compromise our rights by substituting a rights-based approach with a mere risk-mitigation exercise carried out by the very actors with a vested interest in rolling out this technology.