One year of the AI Act: What’s the political and legal landscape now?
The EU Artificial Intelligence (AI) Act came into force on 1 August 2024. This blog takes stock of the political and legal landscape facing its implementation and enforcement one year on, in particular the efforts to delay or even gut the law – efforts which would have far-reaching effects on people’s rights, especially when it comes to the use of AI in migration and law enforcement.
The first anniversary of an imperfect law
After years of negotiations, on 1 August 2024, the Artificial Intelligence (AI) Act became an official EU law. This was a landmark development showing that governments can and will draw red lines against the most harmful uses of AI in our society, such as those that try to use our faces and bodies against us, or strip us of our individual rights and supercharge discrimination by making a judgement based on our group identity or community. The law also established an important framework for more transparency and better governance of AI systems used by both private and government actors.
At the same time, civil society organisations, including EDRi, were generally lukewarm about the final law, especially its failure to take a more human rights-based approach. Among other issues, the law failed to adequately protect people on the move, carved out a glaring exemption for AI used for national security purposes, and placed no prohibitions on the export of rights-violating AI systems beyond the EU.
Following the AI Act’s entry into force, civil society organisations advocated for a strong and rights-respecting implementation at the EU and national levels. We recommended that decision-makers focus on:
- better protections for people on the move
- addressing the environmental impacts of AI
- removing blanket exemptions for the use of AI for national security purposes
- making sure that all systems (not just high-risk ones) are accessible for people with disabilities
- stopping the export of rights-violating AI technologies from the EU
Where are we now?
One year on, the EU is in the middle of implementing and enforcing the AI Act. Both processes have faced criticism from civil society for being riddled with delays, sub-par consultations with affected communities, and undue influence from industry lobbies. In general, the standards of good implementation and good enforcement have not been met. This is deeply worrying, because rigorous and transparent implementation is the foundation for legal certainty and for protecting people from harm.
One example of this poor enforcement and implementation process is the set of national amendments adopted by Hungary in March 2025, which would permit the use of real-time remote biometric identification (RBI) against protesters – such as people attending Budapest Pride – and for minor infractions. The AI Act strictly limits the use of real-time RBI in public, and Hungary’s use of it does not meet the criteria laid out by the law. Despite tireless advocacy by civil society organisations, EU lawmakers did not intervene in Hungary, putting thousands of people at risk of fundamental rights violations.
Additionally, key implementation processes, such as the standardisation process and the drafting of the General-Purpose AI Code of Practice, have received widespread criticism from Members of the European Parliament and civil society for being heavily influenced by industry. Corporate Europe Observatory (CEO) and LobbyControl even filed a complaint with the European Ombudsman against the AI Office over a conflict of interest: two consultancies that the Commission hired to assist with the drafting of AI rules have a direct commercial interest in the AI market.
In another example, the window for interested stakeholders to provide feedback on the guidelines on prohibited AI – a key document on how governments and companies should interpret the AI bans – was open for a mere four weeks. Fortunately, the final guidelines generally take an approach that foregrounds the protection of fundamental rights. They also resolve some of the key issues in the Act – for example, closing the dangerous loophole around what counts as a “national security” use, which could otherwise see governments claiming vague exemptions from the rules without credible justification.
Worryingly, we are now seeing some EU lawmakers seemingly caving to calls from tech lobbies, bolstered by the Trump administration, to postpone or even pause the implementation of the AI Act, and to gut parts of the law before it has even been fully implemented. The enforcement and implementation of the AI Act desperately need more political will from the highest levels of the European Commission, as well as from EU Member State governments, to make the law a success. However, the implementation challenges and errors so far are now being used as a Trojan horse to weaken fundamental rights protections in the AI Act and to put AI hype before the interests of people, our environment and our democratic system.
Unfortunately, to anyone paying attention to recent actions of the European Commission, this is not an isolated change of heart. The EU’s executive branch has been on a rampage – rolling back hard-won environmental protections and human rights victories, while moving ahead with harmful legislation such as extending the data retention period for law enforcement purposes.
The EU’s deregulation push
In the current political climate, the EU has prioritised innovation and plans to make the EU an “AI Continent”, regardless of the costs – a goal used to justify rolling back fundamental rights and environmental protections. The spectre of delaying or reopening the AI Act is symptomatic of this new agenda. What’s more, this unravelling of rights protections is coupled with large investments – 200 billion euros – in AI, from advocating for AI uptake in public institutions and beyond, to infrastructure projects that push AI at any cost. The new EU budget is also bankrolling spending on war and investment in AI to “fully digitalise border control management”, despite consistent criticism of human rights abuses, racial profiling and algorithmic discrimination.
These costs are not only monetary – though those are significant too, with 20 billion euros invested in AI “gigafactories” alone – but also include massive costs to people’s livelihoods, dignity, fundamental rights and the environment, in the EU and particularly in global majority countries.
For example, data centres for AI are preventing new homes in Ireland from being connected to the electricity grid. Moreover, data centres consume vast amounts of electricity and water, contributing to droughts and driving up carbon dioxide emissions. The critical raw materials needed for the chips in data centres are mined mostly in global majority countries, reinforcing colonial dynamics and their harmful consequences for people and the environment, such as land grabbing, severe human rights abuses and environmental damage.
A broad coalition of civil society organisations, experts and academics is opposing any attempt to delay or reopen the AI Act, particularly in light of the growing trend of deregulating fundamental rights and environmental protections. This deregulation push risks undermining key accountability mechanisms and hard-won rights enshrined in EU law, across a wide range of protections for people, the planet, justice and democracy.
We are calling on the Commission to prioritise the rights and well-being of people and the planet over AI hype, and to:
- Maintain all rights protections in the AI Act rather than reopening its provisions – and, what’s more, focus on filling the gaps for areas and people that are not sufficiently protected by the Act.
- Prioritise robust enforcement and thorough implementation of both the AI Act and the wider EU digital rulebook.
- Ensure that all proposals and processes remain transparent and inclusive. This should include comprehensive impact assessments and inclusive public consultations.
- Strengthen the implementation of the AI Act at both EU and national levels, ensuring consistent, effective, and Charter-compliant interpretations of prohibited and high-risk AI systems. We also call for strong, independent supervisory authorities; rights-respecting guidelines and codes for interpretation and application of the law; and meaningful Fundamental Rights Impact Assessments (FRIAs).
- Take proactive steps to address gaps left by the AI Act. Urgent actions are needed to better protect people on the move, address the environmental impacts of AI, remove blanket exemptions for national security uses, ensure accessibility for people with disabilities across all AI systems (not just high-risk ones), and halt the export of rights-violating AI technologies from the EU.
- Introduce additional national bans or restrictions on unacceptably harmful AI, where the AI Act allows. In particular, we call for a ban on the use of remote biometric identification (RBI) systems, such as public facial recognition.