artificial intelligence
-
2021: Looking back at digital rights in the year of resilience
We started 2021 hoping to leave the tremendously challenging year of 2020 behind. The Covid-19 pandemic had a devastating impact on our societies, causing unprecedented harm to people and economies. If 2020 was the year of the pandemic shock, 2021 was the year of resilience. We had to learn to live with constant uncertainty about what it would take to keep defending human rights: Could we work and walk down the streets without being constantly surveilled? Would efforts to tackle disinformation distort legitimate content, or would they bring down Big Tech instead? Would 2022 be 2021 2.0?
Read more
-
The ICO provisionally issues £17 million fine against facial recognition company Clearview AI
Following EDRi member Privacy International's (PI) submissions before the UK Information Commissioner's Office (ICO), as well as other European regulators, the ICO has announced its provisional intent to fine Clearview AI.
Read more
-
Civil society calls on the EU to put fundamental rights first in the AI Act
Today, 30 November 2021, European Digital Rights (EDRi) and 119 civil society organisations launched a collective statement to call for an Artificial Intelligence Act (AIA) which foregrounds fundamental rights.
Read more
-
Artificial intelligence – a tool of austerity
This week Human Rights Watch published a much-needed comment on the EU’s Artificial Intelligence Regulation. As governments increasingly resort to AI systems to administer social security and public services more broadly, there is an ever-greater need to analyse the impact on fundamental rights and the broader public interest.
Read more
-
If AI is the problem, is debiasing the solution?
The development and deployment of artificial intelligence (AI) in all areas of public life have raised many concerns about harmful consequences for society, in particular the impact on marginalised communities. EDRi's latest report "Beyond Debiasing: Regulating AI and its Inequalities", authored by Agathe Balayn and Dr. Seda Gürses, argues that policymakers must tackle the root causes of the power imbalances caused by the pervasive use of AI systems. By promoting technical ‘debiasing’ as the main solution to AI-driven structural inequality, we risk vastly underestimating the scale of the social, economic and political problems AI systems can inflict.
Read more
-
EDRi submits response to the European Commission AI adoption consultation
Today, 3 August 2021, European Digital Rights (EDRi) submitted its response to the European Commission’s adoption consultation on the Artificial Intelligence Act (AIA).
Read more
-
They can hear you: 6 ways tech is listening to you
Voice recognition technology often violates human rights, and it’s popping up more and more. Recently EDRi member Access Now called out Spotify for developing voice recognition tech that claims to be able to detect gender and emotional state, among other things. But it’s not just Spotify. Some of the most powerful companies in the world are deploying similar abusive tech because harvesting data about you is profitable. The market for voice recognition is growing, and is expected to be worth a whopping $26.8 billion by 2025.
Read more
-
EU privacy regulators and Parliament demand AI and biometrics red lines
In their Joint Opinion on the AI Act, the EDPS and EDPB “call for [a] ban on [the] use of AI for automated recognition of human features in publicly accessible spaces, and some other uses of AI that can lead to unfair discrimination”. Taking the strongest stance yet, the Joint Opinion explains that “intrusive forms of AI – especially those who may affect human dignity – are to be seen as prohibited” on fundamental rights grounds.
Read more
-
The urgent need to #reclaimyourface
The rise of automated video surveillance is often touted as a quick, easy, and efficient solution to complex societal problems. In reality, roll-outs of facial recognition and other biometric mass surveillance tools constitute a systematic invasion of people’s fundamental rights to privacy and data protection. As with toxic chemicals, these toxic uses of biometric surveillance technologies need to be banned across Europe.
Read more
-
From ‘trustworthy AI’ to curtailing harmful uses: EDRi’s impact on the proposed EU AI Act
Civil society has been the underdog in the European Union's (EU) negotiations on the artificial intelligence (AI) regulation. The goal of the regulation has been to create the conditions for AI to be developed and deployed across Europe, so any shift towards prioritising people’s safety, dignity and rights feels like a great achievement. Whilst a lot needs to happen to make this shift a reality in the final text, EDRi takes stock of its impact on the proposed Artificial Intelligence Act (AIA). EDRi and partners mobilised beyond the organisations traditionally following digital initiatives, managing to establish that some uses of AI are simply unacceptable.
Read more
-
EU’s new artificial intelligence law risks enabling Orwellian surveillance states
When analysing how AI systems might impact people of colour, migrants and other marginalised groups, context matters. Whilst AI developers may be able to predict and prevent some negative biases, for the most part, such systems will inevitably exacerbate injustice. This is because AI systems are deployed in a wider context of systematic discrimination and violence, particularly in the field of policing and migration.
Read more
-
New AI law proposal calls out harms of biometric mass surveillance, but does not resolve them
On 21 April 2021, the European Commission put forward a proposal for a new law on artificial intelligence. With it, the Commission acknowledged some of the numerous threats biometric mass surveillance poses for our freedoms and dignity. However, despite its seemingly good intentions, the proposed law falls seriously short on our demands and does not in fact impose a ban on most cases of biometric mass surveillance – as urged by EDRi and the Reclaim Your Face coalition.
Read more