Council to vote on EU AI Act: What’s at stake?
The EU Council is set to vote on the AI Act on 2 February, after three years of negotiations on this legislation. Our civil society AI coalition summarises the latest updates, what is at stake, and civil society's views on the AI Act.
On 2 February 2024, the EU Council (representing the EU’s 27 Member State governments) will vote on the EU’s Artificial Intelligence Act. After almost three years of negotiation since the European Commission released the legal proposal, European Member States will vote to endorse or reject the final Act.
However, some Member States, in particular France and Germany, have challenged the AI Act at the very last moment. If the AI Act is rejected, it will not be due to its insufficient human rights protections (which are of course a serious issue), but because Member State governments chose to prioritise the interests of the AI industry and national law enforcement agencies over human rights.
Why are some Member States challenging the AI Act?
Despite making very few compromises in the negotiations, the governments of France and Germany have argued that the AI Act will be too restrictive, specifically in terms of the rules on general purpose AI systems (GPAI), and both countries have raised issues relating to law enforcement agencies. France in particular wanted very little transparency and oversight of police and migration control agencies’ use of AI systems. In fact, during its Presidency, France succeeded in introducing a blanket exemption for any use of AI for “national security” purposes, giving Member States a huge loophole to exploit when they wish to deploy AI-based surveillance technologies and to bypass all of the AI Act’s human rights safeguards. More recently, France has faced allegations that industry lobbying, including conflicts of interest, lies behind the country’s strong stance against regulating GPAI.
There are also reasons to be sceptical of the motivations of some ministries in Germany. Whilst some German representatives have suggested that their possible opposition to the AI Act stems from the Act’s failure to ban biometric surveillance in the EU, there is no evidence that Germany fought for this ban in the negotiations. Leaked documents indicate that they may, in fact, be motivated by a dislike of the rules on GPAI.
Over the past weeks, France and Germany attempted to establish a ‘blocking minority’ to oppose the AI Act. To be clear, whilst governments have used human rights language as a partial justification for opposing the Act, in reality their motivation appears to be shielding law enforcement and the AI industry from accountability and scrutiny over the use of AI. The latest indications are that countries like Germany intend to withdraw their objections, suggesting there will be no minority blocking the Act.
This situation is part of a broader flaw in EU law-making. The AI Act is the result of a deep power imbalance between EU institutions, in which national governments and law enforcement lobbies outweigh those who represent the public interest and human rights. The Parliament was bullied into dropping important human rights protections because Member States chose to concede to lobbying from industry and security forces.
What’s at stake if the AI Act is rejected? Who gains from this scenario?
Whilst the final AI Act text is far from perfect, and major shortcomings remain from a human rights perspective, a total rejection would also pose significant risks for the future of AI regulation.
A rejection would mean losing the (albeit limited) framework of accountability and transparency for high-risk AI development and use. The text introduces transparency towards the public when certain high-risk systems are used, requirements to assess impacts on fundamental rights, as well as a framework of technical standard-setting for AI development and sale. In short, the text places some very modest limits on the use of very dangerous AI systems.
What’s more, a rejection of the AI Act would not mean that biometric mass surveillance is banned. In fact, the AI Act allows Member States to implement a full ban on public facial recognition and other biometric surveillance at national level. So whilst the EU AI Act is a missed opportunity to fully ban biometric mass surveillance practices, it nevertheless leaves opportunities for stronger national protections.
Rejection of the AI Act would also pose serious risks for future AI laws. According to a recent study from the European Council on Foreign Relations, the next EU mandate will see a ‘major shift to the right, with populist parties gaining votes and seats’. Member States already resisted restrictions and oversight of their use of AI in this process, and this resistance is only set to increase, with serious consequences for human rights protections. The new Commission and Parliament might consider splitting up parts of the AI Act in order to get weaker rules – or none at all – on “controversial” topics like law enforcement and general purpose AI.
Rejection would be a major win for Big Tech lobbyists and the security industry, who have consistently advocated against any accountability and transparency for AI systems. These actors would be free to continue operating in secrecy and deploying dangerous, racist AI systems that harm the most marginalised in society.
While the AI Act is far from setting the global standard for human rights-based AI regulation it aims to be, the EU’s failure to adopt it after years of negotiations would risk sending the harmful message that “AI is too difficult” to regulate, thereby playing into Big Tech’s narrative and lobbying for “self-regulation.”
Civil society: What are our views on the AI Act?
Even before the release of the original AI Act proposal, civil society had been demanding a clear framework of human rights protections for the use of dangerous AI systems. In November 2021, over 100 civil society organisations called for concrete changes to the Act to put fundamental rights first.
If the AI Act passes, it will introduce some important improvements designed to enhance technical standards and increase accountability and transparency for the use of high-risk AI. It will even take (very limited) steps to prohibit some AI uses. However, there are also serious causes for concern in the final AI Act text. Putting aside those areas where the EU has simply not gone far enough to prevent harm, the AI Act may contribute to broader shifts that expand and legitimise the surveillance activities of police and migration control authorities, with major implications for fundamental rights in the face of AI systems.
Look out for further analysis from our coalition of civil society organisations soon.
This blog was co-drafted by Ella Jakubowska and Sarah Chander from EDRi, and Caterina Rodelli and Daniel Leufer from Access Now, with contributions from the AI Core group.