How can the EU protect children online while dismantling the very rules designed to keep them safe?

Protecting children online has become one of the most powerful political narratives in Brussels, yet proposals like the Digital Omnibus risk weakening the very safeguards that make this protection possible. The contradiction is hard to ignore: can children truly be protected if the rules designed to keep them safe are being dismantled?

By EDRi · April 15, 2026

A political priority: protecting children in the digital environment

Across the EU, protecting children online has become a central political priority. European institutions and policymakers increasingly frame digital policy around the need to create safer online environments for minors, as illustrated by initiatives such as the European Commission’s Special Panel on child safety online.

However, this growing focus also reveals a deeper contradiction.

During the previous Commission mandate, the EU adopted a more structural and rights-based approach, notably through the Better Internet for Kids (BIK+) strategy. Although its implementation raised important concerns, the strategy placed greater emphasis on safer digital environments, digital literacy and children’s participation. This approach was underpinned by major regulatory frameworks that protect minors online, including the Digital Services Act (DSA), the Artificial Intelligence Act (AI Act) and the General Data Protection Regulation (GDPR). These frameworks rest on a key principle: children’s rights online depend on how digital systems are governed.

Yet, current policies are increasingly shifting away from this approach, favouring restrictive technological measures while weakening core safeguards.

Deregulation risks weakening the safeguards children rely on

Ahead of the launch of the Digital Omnibus proposal, a “simplification” package that in practice amounts to deregulation of key EU digital rules, a coalition of civil society organisations and experts warned that it could undermine children’s rights online. In a joint letter, networks representing over 300 organisations called on the EU to ensure that simplification efforts do not weaken existing protections.

Children are especially exposed to data-driven harms, including profiling, behavioural targeting, and manipulative platform design. These practices rely on the intensive use of personal data and algorithmic systems designed to maximise attention and keep people connected for as long as possible. This is why data protection and platform regulation are part of the infrastructure that protects minors.

The Digital Omnibus leaves this call unheard. Proposed changes to the definition of personal data could allow certain pseudonymised datasets to fall outside the scope of the GDPR. In practice, this would make it easier for companies to reuse large datasets for profiling or training AI systems without applying key protections.
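To make the stakes concrete, here is a deliberately simplified sketch (all names, fields and data are invented for illustration) of why pseudonymisation alone is weak protection: a record stripped of a name can still be linked back to a specific child through quasi-identifiers such as age and postcode.

```python
import hashlib

def pseudonymise(name: str) -> str:
    """Replace a direct identifier with a stable token."""
    return hashlib.sha256(name.encode()).hexdigest()[:12]

# A "pseudonymised" browsing record a company might treat as out of scope.
record = {
    "user": pseudonymise("Alex Janssen"),  # hypothetical person
    "age": 15,
    "postcode": "1000",
    "searches": ["body image tips", "am I normal quiz"],
}

# Auxiliary data from another source (a loyalty scheme, a leak, a broker).
auxiliary = [
    {"name": "Alex Janssen", "age": 15, "postcode": "1000"},
    {"name": "Marie Dupont", "age": 44, "postcode": "9000"},
]

# Linkage attack: the quasi-identifiers single the person out, so the
# "pseudonymised" record still describes an identifiable child.
matches = [p for p in auxiliary
           if (p["age"], p["postcode"]) == (record["age"], record["postcode"])]
print(matches)  # [{'name': 'Alex Janssen', ...}] -> re-identified
```

Datasets like this, once taken out of the GDPR’s scope, could be reused for profiling or AI training without the safeguards the regulation attaches to personal data.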

A teenager who searches for content related to body image, mental health, or identity could be persistently profiled and funnelled into increasingly narrow content streams. This can reinforce harmful narratives, amplify insecurities, or expose them to manipulative advertising.
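The narrowing effect is easy to simulate. The toy model below (every number is invented; real recommender systems are far more complex) shows how an engagement-maximising feedback loop can tilt an initially balanced feed toward whatever content hooks a vulnerable user:

```python
import random

random.seed(0)
topics = ["sports", "music", "body image"]
weights = {t: 1.0 for t in topics}  # start from a balanced feed

for _ in range(30):
    total = sum(weights.values())
    # Recommend in proportion to past engagement.
    topic = random.choices(topics, [weights[t] / total for t in topics])[0]
    # A vulnerable teenager reliably engages with content that hooks their
    # insecurities; the system observes only the click, never the harm.
    engaged = topic == "body image" or random.random() < 0.3
    if engaged:
        weights[topic] *= 1.5  # reinforce whatever was clicked

share = weights["body image"] / sum(weights.values())
print(f"Share of the feed devoted to 'body image' after 30 rounds: {share:.0%}")
```

Each click makes similar content more likely, which produces more clicks: the increasingly narrow content stream is not a glitch but the optimisation target working as designed.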

Other suggested changes would weaken safeguards around automated decision-making. These rules currently limit the use of algorithms in decisions that significantly affect people’s lives. If diluted, automated systems could become more widespread in areas such as education platforms, content moderation, or access to digital services used by minors.

Further concerns arise from proposed changes to the treatment of sensitive data. New derogations could allow sensitive information, such as data revealing health status, political views, or sexual orientation, to remain embedded in AI systems where its removal is deemed “disproportionate.”

In practice, this creates environments where deeply personal characteristics can be inferred, stored, and reused without individuals meaningfully consenting to, or even being aware of, how such data are retained and used. For example, a young person exploring questions about their identity may unknowingly generate data that is later used to categorise them, target them, or expose them to specific types of content or messaging. Where safeguards on purpose limitation and data reuse are weakened, this type of profiling can expand across contexts and become harder to detect or limit.
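As a deliberately simplified illustration (the behavioural “patterns” and labels below are fabricated), the basic mechanics look like this: innocuous signals are matched against learned associations, and the resulting sensitive label can then follow the person across contexts.

```python
# Hypothetical associations a profiling system might have learned.
INFERRED_LABELS = {
    ("lgbtq_forum_visits", "late_night_use"): "questioning identity",
    ("symptom_searches", "pharmacy_ads"): "possible health condition",
}

def infer(signals: set[str]) -> list[str]:
    """Return every sensitive label whose trigger pattern is present."""
    return [label for pattern, label in INFERRED_LABELS.items()
            if set(pattern) <= signals]

observed = {"lgbtq_forum_visits", "late_night_use", "music_videos"}
print(infer(observed))  # ['questioning identity'] -- never disclosed, never consented to
```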

These risks are particularly evident in education. Digital learning platforms increasingly rely on behavioural indicators to categorise students and personalise their teaching. Without strong safeguards, these systems can entrench inequality rather than reduce it. Where data can be reused more broadly and safeguards on automated decision-making are weakened, a child from a disadvantaged background, perhaps sharing a device or experiencing unstable internet access, may be misclassified as inattentive or low-performing. Over time, this can lead to fewer opportunities, lower expectations, and self-reinforcing outcomes.
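A minimal sketch (hypothetical scoring rule, invented thresholds) shows how easily a behavioural proxy conflates disadvantage with inattention:

```python
from dataclasses import dataclass

@dataclass
class Session:
    minutes_active: int
    disconnects: int      # network drops during the session
    device_shared: bool   # e.g. siblings using the same device

def attentiveness_score(s: Session) -> float:
    """Naive proxy: rewards long, uninterrupted sessions on a personal device."""
    score = s.minutes_active - 10 * s.disconnects
    if s.device_shared:
        score *= 0.5  # a shared device looks like 'distraction'
    return max(float(score), 0.0)

well_connected = Session(minutes_active=50, disconnects=0, device_shared=False)
same_effort_unstable = Session(minutes_active=50, disconnects=4, device_shared=True)

print(attentiveness_score(well_connected))        # 50.0 -> labelled 'engaged'
print(attentiveness_score(same_effort_unstable))  # 5.0  -> flagged 'inattentive'
```

Two children putting in identical effort receive very different labels; once such a label feeds into grading, streaming, or teacher dashboards, the gap compounds.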

Quick technological fixes risk harming everyone, including children

Policymakers are also turning toward flashy technological fixes presented as solutions. Yet these measures frequently fail to address the underlying causes of harm.

One prominent example is the growing focus on age-gating online services. While often framed as a simple way to shield minors from harmful content, these systems frequently and bluntly prevent young people from exercising their rights altogether, or require them to submit identity documents, biometric data, or other sensitive information to access systems supposedly tailored to them.

This creates a paradox: in order to be “protected,” children may be required to expose even more personal data.

At the same time, age-gating shifts responsibility away from platforms and onto their users. Not only is Big Tech let off the hook for the harm it causes to young people; the same harmful practices are legitimised for anyone over the age threshold, even by a single day. Age-gating also fails to meaningfully protect young people, who face a stark choice: circumvent these systems and enter spaces now designed exclusively for adults, with even fewer safeguards than before, or be excluded entirely and miss the chance to learn how to navigate risks, build digital skills, and participate in public life.

For many young people, the internet is a space of connection, identity, and access to information. It can provide communities they cannot find elsewhere, as well as resources essential to their well-being and development.

While structural safeguards aim to make digital environments safer for everyone, the sort of technological quick-fixes that some lawmakers seem to prioritise often operate by limiting access or increasing surveillance, without addressing why those environments are unsafe in the first place.

Protecting children requires stronger digital safeguards, not weaker ones

Instead of regulating the systems that create harm, current debates increasingly focus on controlling users themselves. If the rules governing digital systems are weakened, the risks do not disappear; they are simply shifted onto those least able to manage them. Children are not passive users of technology, but they are among the most exposed to its harms: weakening the structural safeguards of the EU’s digital framework only leaves them more vulnerable.

Rather than excluding young people or relying on superficial technical measures, policymakers should strengthen the structural safeguards that shape digital environments. This means enforcing robust data protection rules, ensuring platform accountability, and addressing systemic risks at their source. Upcoming initiatives, such as the Digital Fairness Act, provide an opportunity to tackle manipulative design practices and exploitative forms of profiling that disproportionately affect young users.

Children’s safety cannot be achieved by excluding them or placing the burden on their shoulders. It depends on building a digital environment that is safe, fair, and rights-respecting for everyone.

Itxaso Domínguez de Olazábal (She/Her)

Policy Advisor

Simeon de Brouwer (He/Him)

Policy Advisor