Statement: EU takes modest step as AI law comes into effect
The EU Artificial Intelligence (AI) Act will finally come into force on 1 August 2024. While it's disappointing that the final law did not put people and their rights at the centre, it still contains some silver linings.
On 1 August 2024, the long-awaited Artificial Intelligence (AI) Act enters into force: after years of negotiations, it is now official EU law.
This is a significant administrative step, one which signals to governments and companies alike that they must prepare to abide by these new rules. If they don’t, they could face substantial penalties for developing, selling and using risky AI systems in the EU.
Whilst civil society groups like EDRi and our AI Act coalition partners are disappointed not to see a more human rights-based approach, the final Act nevertheless contains several avenues for positive change.
These include measures like increased technical requirements for developers of AI systems, certain transparency rules for public entities using these systems, and accessibility requirements for risky systems. Other novelties include redress measures for affected people, and red lines against a subset of the most harmful and rights-violating uses (even if these did not go as far as we had hoped).
Where the Act failed to fully live up to its promise to put people and their rights at the centre, we will be working hard as a coalition to make the implementation of these rules as meaningful as possible. We will continue to push for an active role for civil society, as well as for the meaningful participation of everyone who is subject to the use of AI systems, especially minoritised communities.
As we move into this next phase of our work on artificial intelligence, we urge decision-makers to ensure:
- The strong and rights-respecting implementation of the AI Act at EU and national level. We call for a strong, effective and Charter-based interpretation of prohibited and risky technologies under the AI Act; for strong and independent supervisory authorities; for rights-respecting guidelines and codes relating to the interpretation and implementation of the law; and for meaningful Fundamental Rights Impact Assessments (FRIAs);
- Proactive measures to fill the gaps left by the AI Act. These are especially urgent for better protecting people on the move; addressing the environmental impacts of AI; removing blanket exemptions for the use of AI for national security purposes; making sure that all systems (not just high-risk ones) are accessible for people with disabilities; and stopping the export of rights-violating AI technologies from the EU;
- Additional national prohibitions or limitations on unacceptably harmful AI in the cases where the AI Act allows this. Most notably, we call for bans on the use of remote biometric identification (RBI) systems, such as facial recognition in publicly accessible spaces;
- The swift halting of non-compliant uses of AI. The first rules to become operational will be the Act’s prohibitions, which become binding from 2 February 2025. Other rules will be phased in after that, and we will follow this closely to contest uses which fall outside the bounds of the Act;
- Genuinely transparent and inclusive processes. Standards, for example, are an important part of the AI Act’s implementation, yet they are often developed in opaque processes that privilege private entities, with the risk of undermining the rules in the AI Act. Beyond standard-setting, the development of codes of practice and the advisory groups overseeing the AI Act must also be transparent and inclusive of civil society.