Artificial intelligence (AI)
Artificial intelligence (AI) refers to a broad range of processes and technologies that enable computers to complement or replace tasks otherwise performed by humans. Such systems can exacerbate surveillance and intrusion into our personal lives, reflect and reinforce some of the deepest societal inequalities, fundamentally alter the delivery of public and essential services, undermine vital data protection legislation, and disrupt the democratic process itself. In the face of this, EDRi strives to uphold our fundamental rights, democracy, equality and justice in all legislation, policy and practice related to artificial intelligence.
-
The voices of human rights defenders affected by the Pegasus spyware must be heard
EDRi and 22 civil society organisations urge the newly established European Parliament Committee of Inquiry investigating the use of Pegasus and equivalent surveillance spyware to ensure that the systematic targeting of human rights defenders with these technologies is fully examined by the Committee, and that the voices of the human rights defenders affected are heard.
-
The EU AI Act: How to (truly) protect people on the move
The European Union Artificial Intelligence Act (EU AI Act) aims to promote the uptake of trustworthy AI and, at the same time, protect the rights of all people affected by AI systems. While EU policymakers are busy amending the text, one important question springs to mind: whose rights are we talking about?
-
Will the European Parliament stand up for our rights by prohibiting biometric mass surveillance in the AI Act?
On 10 May, EDRi and 52 organisations wrote to Members of the European Parliament asking them to ban the remote use of biometric technologies in publicly accessible spaces, so that the places where we exercise our rights and come together as communities do not become sites of mass surveillance where we are all treated as suspects.
-
Regulating Migration Tech: How the EU’s AI Act can better protect people on the move
As the European Union amends the Artificial Intelligence Act (AI Act), exploring the impact of AI systems on marginalised communities is vital. AI systems are increasingly developed, tested and deployed to judge and control migrants and people on the move in harmful ways. How can the AI Act prevent this?
-
Civil society reacts to European Parliament AI Act draft Report
This joint statement evaluates how far the IMCO-LIBE draft Report on the EU’s Artificial Intelligence (AI) Act, released on 20 April 2022, addresses civil society’s recommendations. We call on Members of the European Parliament to support amendments that centre people affected by AI systems, prevent harm in the use of AI systems, and offer comprehensive protection for fundamental rights in the AI Act.
-
The EU’s Artificial Intelligence Act: Civil society amendments
Artificial Intelligence (AI) systems are increasingly used in all areas of public life. It is vital that the AI Act addresses the structural, societal, political and economic impacts of the use of AI, is future-proof, and prioritises affected people, the protection of fundamental rights and democratic values. The following issue papers detail civil society’s proposed amendments, building on the Civil Society Statement on the AI Act released in November 2021.
-
The European Parliament must go further to empower people in the AI Act
Today, 21 April, POLITICO Europe published a leak of the much-anticipated draft report on the Artificial Intelligence (AI) Act proposal. The draft report has taken important steps towards a more people-focused approach, but it has failed to introduce crucial red lines and safeguards on the uses of AI, including ‘place-based’ predictive policing systems, remote biometric identification, emotion recognition, discriminatory or manipulative biometric categorisation, and uses of AI undermining the right to asylum.
-
How can you influence the AI Act in order to ban biometric mass surveillance across Europe?
The EU is currently negotiating the Artificial Intelligence (AI) Act. This future law offers the chance to effectively ban biometric mass surveillance. This article aims to offer an overview of how the EU negotiates its laws and the key AI Act moments in which people can make their voices heard.
-
About Clearview AI’s mockery of human rights, those fighting it, and the need for the EU to intervene
Clearview AI describes itself as ‘The World’s Largest Facial Network’. However, a quick online search reveals that the company has been involved in several scandals, making the front pages of many publications for all the wrong reasons. In fact, since The New York Times broke the story about Clearview AI in 2020, the company has been constantly criticised by activists, politicians and data protection authorities around the world. Below is a summary of the many actions taken against the company, which has hoarded 10 billion images of our faces.
-
The Clearview/Ukraine partnership – How surveillance companies exploit war
Clearview announced that it would offer its surveillance technology to Ukraine. It seems no human tragedy is off-limits to surveillance companies looking to sanitise their image.
-
EU AI Act needs clear safeguards for AI systems used for military and national security purposes
EDRi affiliate ECNL presents the second set of its proposals on the exemption and exclusion from the AI Act (AIA) of AI used for military and national security purposes, also endorsed by European Digital Rights (EDRi), Access Now, AlgorithmWatch, ARTICLE 19, Electronic Frontier Finland (EFFI), Electronic Privacy Information Center (EPIC) and Panoptykon Foundation.
-
Italian DPA fines Clearview AI for illegally monitoring and processing biometric data of Italian citizens
On 9 March 2022, the Italian Data Protection Authority fined the US-based facial recognition company Clearview AI EUR 20 million after finding that the company had monitored and processed the biometric data of individuals on Italian territory without a legal basis. The fine is the maximum foreseen under the General Data Protection Regulation. It follows a complaint filed by the Hermes Centre in May 2021 in a joint action with EDRi members Privacy International, noyb and Homo Digitalis, as well as complaints from individuals and a series of investigations launched in the wake of the 2020 revelations about Clearview AI’s business practices.