Europe shouldn’t “move fast and break things” with fundamental rights

The Digital Omnibus proposals, presented as “simplification,” risk weakening essential safeguards in the GDPR, the ePrivacy Directive, and the AI Act. By reducing protections and delaying obligations for high-risk systems, they introduce a logic reminiscent of the tech industry’s “move fast and break things” approach. In digital infrastructures built on large-scale data processing and automated decision-making, however, mistakes do not simply disappear. They become part of the system. This is why regulation is essential to protect people’s rights.

By Itxaso Domínguez de Olazábal & Chiara Casati for Tech Policy Press (guest author) · April 15, 2026

A “move fast and break things” mindset is entering EU digital regulation

For years, big tech companies have followed a simple idea: “move fast and break things.” Build quickly, fix problems later. This can sometimes work in software development, but it becomes dangerous when applied to people’s rights.

Today, personal data, often very sensitive, flows across many systems and is used to make decisions about jobs, credit, or access to services. When something goes wrong, the damage may already be done and hard to undo.

This is why the European Union created rules designed to prevent harm before it happens. Laws like the General Data Protection Regulation (GDPR), the ePrivacy rules, and the Artificial Intelligence Act require companies to build safeguards into their systems from the start. These rules focus on transparency, accountability, and risk management because fixing harm afterward is often impossible.

However, proposals like the Digital Omnibus risk moving in the opposite direction. Presented as “simplification,” they would weaken key protections and rely more heavily on companies to assess themselves. At the same time, they are being developed quickly, with limited evidence and little democratic consultation.

This creates a real risk: weakening rules is fast, but fixing the consequences can take years, if possible at all.

When harm becomes invisible

These changes may sound abstract, but they affect everyday life.

One major issue is how “personal data” is defined. Under the current rules, data that does not directly identify you by name, such as browsing behaviour, can still count as personal information and be protected if it can be used to identify you or relate to you within a wider data environment.

For example, imagine a company replaces your name with a random ID and records that “User123 looked at a pair of shoes.” On its own, this may seem anonymous. But when combined with other datasets across the online ecosystem and its various value chains, like data from advertisers or analytics data, it may still be possible to identify you.
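To make the linkage risk concrete, here is a minimal, hypothetical sketch in Python. The datasets, field names, and identifiers are invented for illustration; the point is only that a “pseudonymous” record becomes identifying the moment it shares a quasi-identifier (here, a device fingerprint) with another dataset.

```python
# Hypothetical illustration of re-identification by linkage:
# a "pseudonymous" clickstream becomes personal data once it is
# joined with another dataset that shares a quasi-identifier.

# Retailer's log: names replaced with random IDs, but a device
# fingerprint is kept for analytics purposes.
clickstream = [
    {"user_id": "User123", "fingerprint": "fp-9f3a", "event": "viewed shoes"},
    {"user_id": "User456", "fingerprint": "fp-11c2", "event": "viewed hats"},
]

# Advertiser's dataset: the same fingerprint, tied to an account.
ad_profiles = [
    {"fingerprint": "fp-9f3a", "email": "alice@example.com"},
    {"fingerprint": "fp-77d0", "email": "bob@example.com"},
]

# Linking the two on the shared fingerprint re-identifies "User123".
by_fp = {p["fingerprint"]: p["email"] for p in ad_profiles}
reidentified = [
    {**row, "email": by_fp[row["fingerprint"]]}
    for row in clickstream
    if row["fingerprint"] in by_fp
]

print(reidentified)
```

Neither dataset identifies anyone on its own; the join does. This is why the current rules ask whether identification is reasonably possible across the wider data environment, not just within a single company’s holdings.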

Today, this data is protected because of that risk. Under the proposed changes, the company might claim the data is not personal simply because it cannot identify you, even if others can.

This means the same data could be protected in one situation but not in another, despite the risks being the same. The impact is significant for online tracking: websites routinely monitor users via cookies and device identifiers. If these are no longer consistently classified as personal data, much of this tracking could fall outside regulatory safeguards, even though user behaviour is still being observed. In practice, individuals could be tracked across sites and apps, profiled in detail, and targeted or influenced based on those profiles, without the protections intended to limit such practices.

More automated decisions, fewer safeguards

The proposals also weaken protections around automated decision-making.

Today, decisions made by algorithms, such as whether you get a loan or access to benefits, are treated as high risk and are allowed only under strict conditions. The new approach would make it easier to justify these systems as “necessary,” even when human alternatives exist. In practice, this could make automated decisions the default.

For example, a bank could rely on an algorithm to decide whether to approve a loan. Even if a human review exists, it might become a formality rather than a real safeguard. For individuals, challenging such decisions becomes harder precisely because the system is built to rely on automation from the start.

A recent example shows how this shift already works in practice. In 2025, Meta announced it would use user data from its platforms to train AI systems based on “legitimate interest” instead of asking for explicit consent. Most people were not aware this was happening. Opting out required navigating complex steps, and civil society groups, and in some cases regulators, had to step in to help people exercise their rights. In practice, this meant that many users ended up contributing their data without ever making a clear or informed choice.

The proposed rules would make this approach more common. By explicitly linking AI training and operation to “legitimate interest,” they risk creating a presumption that such uses are acceptable, even at very large scale.

In everyday terms, it means that your data could be used continuously, behind the scenes, in ways you do not fully understand. And while you may still have the right to object, exercising it could be so difficult that, for most people, it barely works in practice.

Less for worse

The proposals also weaken protections for sensitive data, such as health information or political views. Companies could argue that removing such data from AI systems is too difficult or costly.

In practice, this means sensitive data might continue to be used, not because its use is justified, but because it is already embedded in complex systems and difficult to remove.

The same logic applies to the AI framework. One proposal would delay obligations for high-risk AI systems.

In practice, companies are still allowed to build and launch AI systems, even in high-risk areas like hiring, credit scoring, or public services. But if the legal obligations are delayed, they don’t yet have to comply with key safeguards designed to assess risks, document system performance, and enable accountability.

With this gap, companies can roll out systems without fully complying. The result is that AI tools affecting important life decisions – like whether you get a job interview or a loan – could already be in use before safeguards kick in. By the time the rules finally apply, these systems may be deeply embedded, widely used, and much harder to fix or remove.

A shift toward self-regulation

Taken together, these changes shift responsibility away from clear legal safeguards toward company self-assessment.

This creates a fundamental problem: most people, and often even regulators, have little to no visibility into how data is collected or used. Without that information, harmful practices are difficult to detect and even harder to challenge. Yet individuals are expected to do exactly that, after the fact, often without the information or resources to do so.

In this environment, those with the most resources benefit the most. Large tech companies have the legal expertise, technical knowledge, and capacity to navigate uncertainty. Individuals, smaller organisations, and public interest groups are left trying to keep up.

Simplification, in itself, is not the issue: clearer rules and less bureaucracy could help everyone. But here, simplification risks becoming a cover for removing protections. And when that happens, the main beneficiaries are the very companies that built their success on a “move fast and break things” approach.

So the question is no longer abstract: do we want our fundamental rights to be treated the same way?

This article was originally published on Tech Policy Press (insert link)

Itxaso Domínguez de Olazábal (She/Her)

Policy Advisor

Chiara Casati (she/her)

Communications and Media Officer