AI Omnibus: Reject the proposals to undermine transparency in the AI Act

The European Commission’s misguided Digital Omnibus proposal includes a dangerous rollback of transparency requirements in the AI Act. 60 civil society organisations, independent public authorities and individuals, including EDRi, urge EU lawmakers to reject a change that would risk weakening enforcement, legal certainty, and the protection of fundamental rights, while offering negligible benefits for companies.

By EDRi · February 11, 2026

Removing Article 49(2) would only weaken enforcement and fundamental rights protection

On 19 November, the European Commission presented its proposal for a Digital Omnibus on AI Regulation (“AI Omnibus”), part of the broader Digital Omnibus package. As the legislative process continues, the European Parliament has been invited to submit amendments on the AI Omnibus by 13 February. Given the serious procedural deficits in the preparation of this package, as well as the lack of evidence about how the almost brand-new AI Act is functioning, we think the co-legislators should reject the AI Omnibus entirely. Failing that, there are crucial improvements that must be made.

60 civil society organisations, independent public authorities and individuals, including EDRi, have sent a joint letter to Members of the European Parliament, representatives of the EU Member States, Executive Vice-President Virkkunen, and Commissioner McGrath. In the letter, we urge them to uphold the integrity of the AI Act and reject the deletion, proposed in the AI Omnibus, of Article 49(2), a provision that establishes crucial transparency safeguards for high-risk AI systems.

This proposed deletion is one of the most troubling elements of the AI Omnibus, because it would exempt AI providers from registering their systems in the EU database if they claim those systems are not high risk. This registration requirement is one of the AI Act’s most important transparency safeguards. Removing it would risk transforming the AI Act into a form of self-regulation, placing trust in the discretion of developers of potentially harmful AI systems.

Such a change would significantly weaken enforcement, undermine legal certainty, and ultimately erode the protection of fundamental rights that the AI Act is designed to guarantee. Without mandatory registration, regulators would face greater challenges in identifying, monitoring, and addressing risky AI deployments.

Crucially, the benefits of eliminating this obligation are minimal. According to the Commission’s own estimates, companies would save approximately €100 by avoiding registration, an amount that does not meaningfully enhance competitiveness. Instead, this change would hollow out a core pillar of the AI Act and risk turning it into an optional, compliance-light framework.

Finally, we emphasise that whilst this is one of the most notable issues with the AI Omnibus, there are several other serious problems that must be fixed. These include rejecting the expansion of SME privileges to large companies, preserving the powers and independence of Article 77 bodies (e.g. national human rights bodies), and refusing the proposed delay in the AI Act’s application. Anything less than this will drastically weaken the EU’s AI Act, taking the teeth out of a law that is supposed to protect human rights and safety from unscrupulous and harmful uses of AI.