Age verification gains traction: the EU risks failing to address the root causes of online harm

Narratives around age verification and restriction of access for minors are gaining traction in the EU, amid similar efforts being pursued in the UK, US and Australia. This blog analyses different EU policy files and warns that relying on age-gating risks undermining more holistic, rights-respecting and effective solutions to online harm.

By EDRi · September 2, 2025

Age verification: a blunt tool for online safety

Lawmakers and online platforms are increasingly turning to age verification tools in their search for ways to make the digital environment safer for young people. But these blunt ‘solutions’ can overshadow a range of other tools in the online safety toolbox that are more meaningful and effective. This overly narrow focus on age-gating obscures the fact that systemic design choices at platform level are the root cause of these harms, affecting children and adults alike.

With daily headlines about pro-eating disorder content, cyberbullying and screen addiction, lawmakers and parents are rightly preoccupied with how to address these serious problems. Age verification has become a seemingly low-hanging fruit for which there is much political appetite. This is a clear example of technosolutionism: the belief that complex social problems can be quickly fixed by technology.

Yet, widespread age gates do nothing to address the deeper issues of platform design: toxic layouts, addictive patterns, and environments that fuel harassment remain untouched. The real risk is that once young people cross the minimum age threshold, they are simply thrust into equally toxic and addictive spaces, where they continue to be exploited for platforms’ economic gain. As a result, the harmful systems persist.

Age verification is a form of exclusion, not empowerment. It disregards the evolving capacities, agency and autonomy of young people. This one-size-fits-all approach risks impeding the development of children, who will see their ability to exercise their rights to free expression and access to information restricted or outright denied – an approach which many child rights organisations have long warned against.

Moreover, mandatory verification will further entrench inequalities. It excludes and discriminates against those – young and less young – who lack identity documents, access to technology, or the digital literacy required to complete the age verification process.

This appetite for technological shortcuts is increasingly shaping policy. Two policy developments at EU level have recently brought age verification to the fore of the European policy agenda: the Schaldemose report on the protection of minors online, and the Commission’s guidelines under Article 28 of the Digital Services Act. These files illustrate how the debate on the protection of children online risks becoming dominated by quick fixes rather than structural reform.

The European Parliament’s report on the protection of minors online

The push for age verification recently took centre stage in the European Parliament with the Schaldemose report on the protection of minors online. Intended to influence the forthcoming Digital Fairness Act (DFA), the report – named after its rapporteur, Member of the European Parliament (MEP) Christel Schaldemose (S&D) – calls for a unified EU-wide solution for age verification that cannot easily be circumvented. It sparked debate, with hundreds of amendments (1 to 319 and 320 to 471) tabled by other MEPs to the draft report, which MEP Schaldemose will now analyse in search of compromises.

Here is how each political group, in order of political weight, positioned itself on age verification, as reflected in the amendments tabled by its MEPs:

  • Worryingly, the conservative EPP group unequivocally calls not only for mandatory age assurance on devices, app stores, social networks and online services (amendments 147, 172, 182, 187, 189, 192, 196, 198, 202), but also for mandatory identification of users, which could endanger the online anonymity relied on by journalists, dissidents and others (148, 229).
  • The social democrat group, S&D, is divided. Some MEPs call for mandatory age limits, implemented through mandatory age verification (173, 215, 181), whereas others highlight that such measures are not always proportionate (116, 185, 211).
  • The far-right Patriots for Europe group wants a magical solution, finding age verification acceptable as long as such tools do not restrict freedom of expression/information (184), do not lead to surveillance (228), and are implemented by decision of national governments rather than at EU level (221).
  • Similarly, the right-to-far-right ECR group supports the implementation of age verification only where proportionate (183, 218, 220, 225, 231, 255), while emphasising the need to preserve privacy and anonymity (230) and insisting that parents should be able to override restrictions (266).
  • The group of centre-right liberals, Renew, unequivocally calls for mandatory age limits enforced through age verification, failing to consider the risks that such tools pose (175, 178, 189, 196, 198, 202).
  • Both the Greens/EFA and the Left groups reject mandatory age verification (137, 143, 150, 165, 167, 180).

While no parliamentary majority emerges from these amendments in favour of always-on, mandatory age verification, there is also no majority against the idea of relying on exclusion as a means to protect youth. Unfortunately, the Schaldemose report is likely to entrench age verification as a politically acceptable measure despite its many shortcomings. This will incentivise the Commission to prioritise superficial ‘fixes’ in the Digital Fairness Act at the expense of tackling the real structural problems. The key question now is where – if anywhere – age verification can truly be considered proportionate.

The DFA is not the place to mandate widespread age verification, which is already sufficiently regulated in other legal frameworks. More importantly, online protection should not be fragmented: everyone deserves an equally high level of protection online. What’s more, young people are not only harmed when they go online; they are harmed by the online environment whether or not they are connected to it. Toxic content and manipulative platforms shape their peers, families, schools and cultures; advertising and profiling systems target them indirectly through others; and default designs normalise surveillance, addiction and commercial exploitation as acceptable standards, even beyond their own use of these services.

What we need is a systemic response. By ensuring that digital services are fair and safe by design for all users, we would create an internet that protects minors meaningfully, without isolating them or pushing them into riskier and more opaque spaces. The DFA is our opportunity to address these harms holistically, moving beyond the false promise of mass age verification and restrictions.

The European Commission’s guidelines on the privacy, safety, and security of minors under the DSA

On 14 July 2025, the Commission published guidelines providing a legal interpretation of Article 28 of the Digital Services Act (DSA), which states that online platforms must put in place “measures to ensure a high level of privacy, safety, and security of minors”.

While we welcome the guidelines’ recommendations on platform design, default settings and functionalities, it is worth noting that social media platforms had already committed to similar measures in 2009 under the Commission’s supervision. Yet, over the years, platforms have done little to address the harms they continue to inflict on people or expose them to, and for which they remain notorious. Given the renewed political appetite to protect people online, it is crucial to use this momentum to achieve meaningful measures rather than problematic quick fixes such as age verification. There is a serious risk that profit-seeking platforms will treat age gates as a self-sufficient solution – claiming that if no children are present, no harm can occur – because such measures are far easier to implement than changing harmful designs and exploitative business models. We cannot allow them to evade deeper responsibility.

The Commission finalised the guidelines after two public consultations, to which we contributed input (to be found here and here). In its first draft, the Commission was cautiously positive about the use of age assurance, including age verification, while warning against the risk of disproportionate reliance on it. However, although the Commission’s summary report states that “the overall trend [in the feedback received] leaned in favour of disagreeing with the appropriateness of age assurance measures,” the final version of the guidelines nonetheless further entrenches widespread reliance on such tools – a 180-degree turn that came as a shock to us. In the final version, the Commission states that age verification is proportionate and appropriate where “due to identified risks to minors, the terms and conditions or any other contractual obligations of the service require users to be 18 years or older to access the service, even if there is no formal age requirement established by law.”

In another worrying move, the Commission also decided to endorse Member States’ authority to set a national minimum age for social media, permitting them to control platform access through the deployment of mandatory age verification.

Where does this leave us?

The two worrying developments outlined above indicate that age verification is increasingly being framed as a legitimate and proportionate means of protecting children. We will continue to highlight that age verification is a short-sighted measure: it does not help young people navigate online spaces, it can disproportionately limit children’s rights, it is invasive, it excludes a large portion of the population beyond children, and it ultimately lacks effectiveness because it is easy to circumvent.

Child protection organisation ECPAT has raised concerns about the over-reliance on age verification, explaining that: “A child’s right to safety online can never be solved by implementing age assurance technology on selected websites or platforms. […] [I]n many instances, a better solution would be to adapt the website or platform for all users, including children.” This perspective is shared explicitly by other organisations, such as 5Rights and Amnesty International, and implicitly supported by groups like COFACE Families Europe and the Child Rights International Network. We will also champion this approach, as it eliminates the need for age gates that often exclude young users.

We will continue advocating for an empowering, caring and rights-respecting notion of safety, as opposed to one based on surveillance and restrictions. The future DFA must prioritise redesigning online spaces to make them safer for everyone, rather than simply restricting access for minors. If we want to truly tackle a problem, we should address its root cause, not its symptoms.

Simeon de Brouwer (he/him)

Policy Advisor