artificial intelligence
EU’s new artificial intelligence law risks enabling Orwellian surveillance states
When analysing how AI systems might impact people of colour, migrants and other marginalised groups, context matters. Whilst AI developers may be able to predict and prevent some negative biases, for the most part such systems will inevitably exacerbate injustice. This is because AI systems are deployed in a wider context of systematic discrimination and violence, particularly in the fields of policing and migration.
New AI law proposal calls out harms of biometric mass surveillance, but does not resolve them
On 21 April 2021, the European Commission put forward a proposal for a new law on artificial intelligence. With it, the Commission acknowledged some of the numerous threats that biometric mass surveillance poses to our freedoms and dignity. However, despite its seemingly good intentions, the proposed law falls seriously short of our demands and does not in fact impose a ban on most cases of biometric mass surveillance, as urged by EDRi and the Reclaim Your Face coalition.
Why the EU needs to be wary that AI will increase racial profiling
Central to predictive policing systems is the notion that risk and crime can be objectively and accurately forecast. Not only is this presumption flawed, but it also demonstrates a growing commitment to the idea that data can and should be used to quantify, track and predict human behaviour. The increased use of such systems is part of a growing ideology that social issues can be solved by allocating more power, resources, and now technologies, to the police.
Computers are binary, people are not: how AI systems undermine LGBTQ identity
Companies and governments are already using AI systems to make decisions that lead to discrimination. When police or government officials rely on them to determine who they should watch, interrogate, or arrest — or even “predict” who will violate the law in the future — there are serious and sometimes fatal consequences. EDRi's member Access Now explains how AI can automate LGBTQ oppression.
EU’s AI proposal must go further to prevent surveillance and discrimination
The European Commission has just launched the EU's draft regulation on artificial intelligence (AI). AI systems are increasingly being used in all areas of life: to monitor us at protests, to identify us for access to health and public services, and to make predictions about our behaviour or how much ‘risk’ we pose. Without clear safeguards, these systems could further entrench the power imbalance between those who develop and use AI systems and those who are subject to them.
Artificial Intelligence and Fundamental Rights: Document Pool
Find in this document pool all EDRi analyses and documents related to Artificial Intelligence (AI) and fundamental rights.
The EU should regulate AI on the basis of rights, not risks
EDRi's member Access Now explains why the upcoming legislative proposal on AI should be a rights-based law, like the GDPR. The European Commission must not compromise our rights by reducing protection to a mere risk-mitigation exercise carried out by the very actors with a vested interest in rolling out this technology.
This is the EU’s chance to stop racism in artificial intelligence
Human rights mustn’t come second in the race to innovate; rather, they should define innovations that better humanity. The European Commission's upcoming proposal may be the last opportunity to prevent harmful uses of AI-powered technologies, many of which are already marginalising Europe's racialised communities.
116 MEPs agree – we need AI red lines to put people over profit
In light of the upcoming proposal for the regulation of artificial intelligence in Europe, 116 Members of the European Parliament (MEPs) have written to the European Commission’s leaders in support of EDRi’s letter calling for red lines on uses of AI that compromise fundamental rights.
We want more than “symbolic” gestures in response to discriminatory algorithms
In an escalating scandal over child benefits, more than 26,000 families were wrongly accused of fraud by the Dutch tax authority. Families were forced to repay tens of thousands of euros, resulting in unemployment, divorces, and families losing their homes. EDRi member Bits of Freedom reveals the discriminatory algorithms used by the authority and urges the government to ban their use and to develop legislation on Artificial Intelligence.
How to Reclaim Your Face From Clearview AI
The Hamburg Data Protection Authority deemed Clearview AI’s biometric photo database illegal in the EU as a result of a complaint filed by Matthias Marx, a member of the Chaos Computer Club (an EDRi member).
Civil society calls for AI red lines in the European Union’s Artificial Intelligence proposal
European Digital Rights, together with 61 civil society organisations, has sent an open letter to the European Commission demanding red lines for applications of AI that threaten fundamental rights.