From ‘trustworthy AI’ to curtailing harmful uses: EDRi’s impact on the proposed EU AI Act
Civil society has been the underdog in the European Union’s (EU) negotiations on the artificial intelligence (AI) regulation. The goal of the regulation has been to create the conditions for AI to be developed and deployed across Europe, so any shift towards prioritising people’s safety, dignity and rights feels like a great achievement. Whilst a lot still needs to happen to make this shift a reality in the final text, EDRi takes stock of its impact on the proposed Artificial Intelligence Act (AIA). EDRi and partners mobilised beyond the organisations that traditionally follow digital initiatives, managing to establish that some uses of AI are simply unacceptable.
In 2018, the European Commission focused on blurry standards of ‘ethics’. By 2021, the European Commission’s President von der Leyen recognised that “in the case of applications that would be simply incompatible with fundamental rights, we may need to go further.” Yet the proposed AIA still focuses on facilitating the uptake of AI in Europe. Increasingly – and under the pressure of civil society – the European Commission has shown willingness to curtail the most harmful impacts of this design and deployment. As AI will impact all aspects of our lives, European legislators must get this right.
This blog shares initial insights on the impact of EDRi’s work on the Commission’s proposal. For a detailed analysis of the AIA by the EDRi network, see here.
Small steps towards AI red lines and a ban on biometric mass surveillance
Building on Access Now’s early engagement as a member of the High-Level Expert Group on AI, where they critiqued the ‘ethics approach’ and advocated for human rights standards and regulation, EDRi’s actions and efforts have focused on requiring the European Commission to set legal limits against AI uses that are incompatible with human rights. This has established red lines and a so-called prohibition of biometric mass surveillance as legitimate topics on the EU’s agenda. Evidence for this is the inclusion of prohibitions in the final proposal, compared to the White Paper, which relied only on a risk-based approach and suppressed a reference to a moratorium.
The recognition, at the highest levels of the European Commission as well as in the text, that some uses are simply “unacceptable” is a notable step. The European Commission has listened, and taken the time to send detailed answers to our open letters. A member of Commissioner Vestager’s cabinet noted the increase in civil society influence, remarking on NGOs’ impact on the proposal: ‘I would not have imagined this a year ago’.
EDRi’s language is reflected throughout the draft regulation, for example “mass surveillance in publicly accessible spaces”, “access to and enjoyment of essential private services and public services and benefits”, AI systems that “perpetuate historical patterns of discrimination”, and discrimination against “persons or groups”. Access Now’s call for a public register of AI systems was also partially answered by the establishment of the EU database of high-risk AI systems.
This was made possible thanks to the growing support of the European Parliament and significant successes in its reports on AI. In particular, some reports have started to reflect our policy recommendations relating to the use of AI applications in judicial proceedings and predictive policing, to AI and discrimination, and to biometric mass surveillance. 116 MEPs wrote a letter supporting our open letter and calling for red lines and for fundamental rights to be at the center of the AI regulation.
Effective press and communication actions have put the EDRi network on the map as a key actor on AI in Brussels and beyond. EDRi was mentioned 25 times in the press in the first cycle of responses to the AIA, including in Politico, EU Observer, Euronews, The Economist, the New York Times, Reuters, The Verge, the Wall Street Journal, Wired, the Financial Times and Bloomberg. Coverage appeared in German, Italian, Spanish, French and Greek in addition to English. This is the result of both the official press release and EDRi’s “hot takes” Twitter thread, which led to 164.7K impressions on the day of the live tweeting and 100 new followers.
The unique role of the Reclaim Your Face campaign
The Reclaim Your Face campaign and European Citizens’ Initiative (ECI) calling for a ban on biometric mass surveillance practices have played a determining role among our tools for public pressure. The fact that the European Commission had to validate the ECI when it launched forced a public recognition of the EU’s competence on the matter of biometric mass surveillance. A number of European Commission staff have publicly and privately stated that Article 5 of the AIA (prohibitions) would not have been there without civil society advocacy and campaigning, including the attention raised by the ECI.
Campaign efforts have also shifted the narrative around biometric mass surveillance, as evidenced by two recent decisions from national Data Protection Authorities in Italy and the Netherlands. Both the Greens/EFA and the Left groups in the European Parliament have explicitly supported the ECI; over 10% of the European Parliament has now publicly endorsed the campaign through open letters, and almost half of the Parliament (283 MEPs) supported it in a key vote.
EDRi’s mobilising and inclusive role
EDRi has deliberately promoted voices and expertise that are not normally heard or valued in EU decision-making processes. Significant coalition-building efforts, such as the Dignity workshops with groups representing those most impacted by AI systems and the mobilisation of over 60 human rights organisations, have emphasised the need for the most marginalised in our society to be actively included in campaigning and the policy-making process. This approach had a concrete impact on the final proposal, for instance on predictive policing.
What we learned
EDRi is still working through substantive feedback and reflective sessions to inform its strategy moving forward. So far, we have learned that coordination within and beyond the EDRi network is effective but requires a strategic approach and sustained effort. Centering the voices and expertise of people affected by AI systems matters and can lead to change, such as countering corporate lobbying. Once more, we have seen how much policy makers benefit from the specific case studies that NGOs with access to local or national actors can bring. On the legal front, we benefited from the early work of the Fundamental Rights Agency (FRA) and from our own network of legal professionals to make clear requests for future legislation.
People in Europe are concerned about states and corporations making creepy predictions based on AI and biometric data. The growing movement around the Reclaim Your Face campaign is pressuring legislators to put in place the necessary limits before illegal and unsafe systems are employed outside of democratic oversight.
EDRi will continue to center human rights to prevent AI discrimination and mass surveillance. You can join the coalition by signing the European Citizens’ Initiative to ban biometric mass surveillance.
Image credit: Lorenzo Miola/ Fine Arts
- EDRi: EU’s AI law needs major changes to prevent discrimination and mass surveillance (28.04.2021)
- EDRi: Artificial Intelligence and Fundamental Rights: Document Pool (12.04.2021)
- Access Now: Europe’s Approach to Artificial Intelligence: How AI strategy is Evolving (09.12.2020)
- Panoptykon Foundation: Black-Boxed Politics (17.02.2020)