AI
-
They can hear you: 6 ways tech is listening to you
Voice recognition technology often violates human rights, and it’s popping up more and more. Recently, EDRi member Access Now called out Spotify for developing voice recognition tech that it claims can detect gender and emotional state, among other things. But it’s not just Spotify. Some of the most powerful companies in the world are deploying similarly abusive tech because harvesting data about you is profitable. The market for voice recognition is growing and is expected to be worth a whopping $26.8 billion by 2025.
-
EU privacy regulators and Parliament demand AI and biometrics red lines
In their Joint Opinion on the AI Act, the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB) “call for [a] ban on [the] use of AI for automated recognition of human features in publicly accessible spaces, and some other uses of AI that can lead to unfair discrimination”. Taking the strongest stance yet, the Joint Opinion explains that “intrusive forms of AI – especially those who may affect human dignity – are to be seen as prohibited” on fundamental rights grounds.
-
Biometric mass surveillance flourishes in Germany and the Netherlands
In a new research report, EDRi reveals the shocking extent of biometric mass surveillance practices in Germany, the Netherlands and Poland, which are taking over public spaces such as train stations, streets and shops. The EU and its Member States must act now to set clear legal limits on these practices, which create a state of permanent monitoring, profiling and tracking of people.
-
EDRi joins 178 organisations in global call to ban biometric surveillance
From protesters taking to the streets in Slovenia, to the subways of São Paulo; from so-called “smart cities” in India, to children entering French high schools; from EU border control experiments, to the racialised over-policing of people of colour in the US. In each of these examples, people around the world are increasingly and pervasively being subjected to toxic biometric surveillance. This is why EDRi has joined the global Ban Biometric Surveillance coalition, to build on our work in Europe as part of the powerful Reclaim Your Face campaign.
-
The urgent need to #reclaimyourface
The rise of automated video surveillance is often touted as a quick, easy and efficient solution to complex societal problems. In reality, roll-outs of facial recognition and other biometric mass surveillance tools constitute a systematic invasion of people’s fundamental rights to privacy and data protection. As with toxic chemicals, these toxic uses of biometric surveillance technologies need to be banned across Europe.
-
Challenge against Clearview AI in Europe
This legal challenge relates to complaints filed with five European data protection authorities against Clearview AI, Inc. (“Clearview”), a facial recognition technology company building a gigantic database of faces.
-
From ‘trustworthy AI’ to curtailing harmful uses: EDRi’s impact on the proposed EU AI Act
Civil society has been the underdog in the European Union’s (EU) negotiations on the artificial intelligence (AI) regulation. The goal of the regulation has been to create the conditions for AI to be developed and deployed across Europe, so any shift towards prioritising people’s safety, dignity and rights feels like a great achievement. Whilst a lot still needs to happen to make this shift a reality in the final text, EDRi takes stock of its impact on the proposed Artificial Intelligence Act (AIA). EDRi and partners mobilised well beyond the organisations that traditionally follow digital initiatives, managing to establish that some uses of AI are simply unacceptable.
-
Can a COVID-19 face mask protect you from facial recognition technology too?
Mass facial recognition risks our collective futures and shapes us into fear-driven societies of suspicion. This got folks at EDRi and Privacy International brainstorming: could the masks that we now wear to protect each other from COVID-19 also protect our anonymity, preventing the latest mass facial recognition systems from identifying us?
-
EU’s new artificial intelligence law risks enabling Orwellian surveillance states
When analysing how AI systems might impact people of colour, migrants and other marginalised groups, context matters. Whilst AI developers may be able to predict and prevent some negative biases, for the most part, such systems will inevitably exacerbate injustice. This is because AI systems are deployed in a wider context of systematic discrimination and violence, particularly in the field of policing and migration.
-
New AI law proposal calls out harms of biometric mass surveillance, but does not resolve them
On 21 April 2021, the European Commission put forward a proposal for a new law on artificial intelligence. With it, the Commission acknowledged some of the numerous threats that biometric mass surveillance poses to our freedoms and dignity. However, despite its seemingly good intentions, the proposed law falls seriously short of our demands and does not in fact impose a ban on most cases of biometric mass surveillance – as urged by EDRi and the Reclaim Your Face coalition.
-
Why the EU needs to be wary that AI will increase racial profiling
Central to predictive policing systems is the notion that risk and crime can be objectively and accurately forecast. Not only is this presumption flawed, it also demonstrates a growing commitment to the idea that data can and should be used to quantify, track and predict human behaviour. The increased use of such systems is part of a growing ideology that social issues can be solved by allocating more power, resources and, now, technologies to the police.
-
Computers are binary, people are not: how AI systems undermine LGBTQ identity
Companies and governments are already using AI systems to make decisions that lead to discrimination. When police or government officials rely on them to determine who they should watch, interrogate, or arrest — or even “predict” who will violate the law in the future — there are serious and sometimes fatal consequences. EDRi member Access Now explains how AI can automate LGBTQ oppression.