artificial intelligence
-
For a truly “Trustworthy AI,” EU must protect rights and deliver benefits
EDRi member Access Now published a report exploring the actions EU governments are taking to promote what the EU calls "Trustworthy AI", what this approach means for human rights, and how European AI strategy is changing, both for EU institutions and national governments.
-
Technological Testing Grounds: Border tech is experimenting with people’s lives
The European Union is increasingly experimenting with high-risk migration management technologies.
-
Attention EU regulators: we need more than AI “ethics” to keep us safe
In this post, Access Now and European Digital Rights (EDRi) analyse recent developments in the EU AI debate and explain why we need a bold, bright-line approach that prioritises our fundamental rights.
-
Technology has codified structural racism – will the EU tackle racist tech?
The EU is preparing its ‘Action Plan’ to address structural racism in Europe. With digital high on the EU’s legislative agenda, it’s time we tackle racism perpetuated by technology, writes Sarah Chander.
-
EDRi submits response to the European Commission AI consultation – will you?
On 4 June 2020, European Digital Rights (EDRi) submitted its response to the European Commission’s public consultation on artificial intelligence (AI). In addition, EDRi released its recommendations for a fundamental rights-based Artificial Intelligence Regulation.
-
Can the EU make AI “trustworthy”? No – but they can make it just
European Digital Rights (EDRi) submitted its answer to the European Commission’s consultation on the AI White Paper.
-
Technology, migration, and illness in the times of COVID-19
In our ongoing work on technology and migration, we examine the impacts of the current COVID-19 pandemic on the rights of people on the move and the increasingly worrying use of surveillance technology and AI at the border and beyond.
-
COVID-Tech: Emergency responses to COVID-19 must not extend beyond the crisis
In EDRi's new series on COVID-19, we will explore the critical principles for protecting fundamental rights while curtailing the spread of the virus, as outlined in the EDRi network's statement. Each post in this series will tackle a specific issue at the intersection of digital rights and the global pandemic in order to explore broader questions about how to protect fundamental rights in a time of crisis.
-
Facial Recognition & Biometric Mass Surveillance: Document Pool
Despite evidence that public facial recognition and other forms of biometric mass surveillance infringe on a wide range of EU fundamental rights, European authorities and companies are deploying these systems at a rapid rate. This has happened without proper consideration for how such practices invade people's privacy on an enormous scale; amplify existing inequalities; and undermine democracy, freedom and justice.
-
Can we rely on machines making decisions for us on illegal content?
While automation is necessary for handling the vast amount of content shared by users, it makes mistakes whose consequences can be far-reaching for your rights and the well-being of society. Most of us like to discuss our ideas and opinions on silly and serious issues, share happy and sad moments, and play together on the internet. […]
-
A human-centric internet for Europe
The European Union has set digital transformation as one of its key pillars for the next five years. New data-driven technologies, including Artificial Intelligence (AI), offer societal benefits – but addressing their potential risks to our democratic values, the rule of law, and fundamental rights must be a top priority. “By driving a human rights-centric […]
-
The human rights impacts of migration control technologies
This is the first blogpost of a series on our new project which brings to the forefront the lived experiences of people on the move as they are impacted by technologies of migration control. The project, led by our Mozilla Fellow Petra Molnar, highlights the need to regulate the opaque technological experimentation documented in and […]