Why age verification misses the mark and puts everyone at risk

Age verification is a short-sighted, ineffective and dangerous way to protect young people from online harms. It is disproportionate when better alternatives are available, straightforward and rewarding to circumvent, and structurally impacts more people than it aims to protect.

By EDRi · November 25, 2025

The EU on the wrong path to protect young people from online harm

In the name of protecting young people from online harm, many lawmakers have suggested measures such as monitoring conversations, restricting their behaviour, or excluding them from online spaces altogether. Despite EDRi’s early and continued warnings, age verification keeps gaining popularity in this debate, to the point that it has been endorsed by the European Commission and by the Council. Without drastic intervention, the European Parliament could soon follow suit.

EDRi has serious concerns about age verification and the wider narrative that young people must endure invasive and intrusive measures to be protected online. As we have long advocated, the root causes of harm need to be addressed in order to make the internet safer for all its users. Age-based exclusion does not solve these root causes and merely delays people’s exposure to harm.

Restriction in the exercise of one’s fundamental rights

Children have fundamental rights; these rights may be restricted, but only when necessary and proportionate. Restricting children's rights "for their own safety" is an odd risk mitigation measure when there is so much room to improve online safety before resorting to exclusion (for instance, see the upcoming DFA and the Art. 28 DSA Guidelines – except for their age-verification elements).

EDRi, in line with the OECD and the UN’s Committee on the Rights of the Child, stands against the proposed restrictions, and emphasises that children need and deserve online spaces, where they can meet others, find comfort and safety, confront and exchange their ideas, build relationships, learn, and play. Given the potential consequences that blunt age-based restrictions will have on youngsters’ lives, it is very likely that young people will find ways to bypass them entirely, and will find this highly rewarding.

To better address their needs while respecting their rights, we need more efficient, inclusion-based measures.

The solution to the harm sustained by children online must preserve their rights and autonomy. Children and their guardians must be empowered, but for this to be effective, structural harm must be addressed first.

Data protection, anonymity and cybersecurity

The rising number of cybersecurity incidents demonstrates the interest – both economic and geopolitical – in accessing, infiltrating, tampering with or shutting down private and public digital environments. Age verification significantly increases the attack surface, due to the increase in both the quality and the quantity of personal data processed.

Statistically, even those age verification tools advertised as preserving anonymity will suffer some failure, hack or leak. At the first incident of this kind, public trust will crumble immediately, increasing the chilling effect of age verification requirements.

Even the EU’s proposed temporary age verification app and eID Wallet, still under development, lack sufficient privacy guarantees. Once they are made available en masse, a lot more regulatory oversight and scrutiny will be needed to ensure that they function properly and do not open the door to pervasive mass surveillance.

In the meantime, some platforms will comply with age verification requirements using cheap tools. As recent cases show (AgeGo, Ageverif, Yoti, and AU10TIX), such tools may leak sensitive data, be inaccurate for people of colour, be easily circumvented, or abuse their privileges to track users and share data with third parties.

Aiming for a few, impacting all

Access to the internet has become an indispensable tool for realising a range of human rights and, as such, it should not depend on one’s ability or willingness to present any form of ID, documents, or biometrics. Age verification violates this ideal: every user would have to prove that they are not a minor in order to enjoy its benefits in full.

The freedoms associated with an open internet are crucial for a healthy civic space. They safeguard the work of those who scrutinise power and strengthen our democracy, like journalists, whistleblowers and activists; they provide a lifeline to those the state often sidelines, including marginalised or segregated communities.

In order to identify the users belonging to the target group, every user is indiscriminately forced to undergo the process and jump through the hoops to prove they have the desired characteristic – lest they, too, suffer the restrictions. This creates a disproportionate restriction of rights, where all users are presumed to be underage unless proven otherwise.

This adds to the fact that not everyone possesses the documents or the technological equipment necessary to prove their age. Age verification requirements would mean the exclusion of millions of people, including undocumented individuals, those without a modern smartphone, people with low digital literacy, and those who do not trust the tool.

If exclusion really were the way forward, more proportionate approaches ought at least to be considered – those whereby data processing is targeted and restricted to the age group the measure is aimed at. The Californian Digital Age Assurance Act, insofar as it minimises the amount and sensitivity of data processed, seems promising in this regard, though more work is needed to ensure that it can be compliant with people’s rights.
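To illustrate what data-minimised age assurance can mean in practice, here is a minimal sketch in Python. It assumes a hypothetical selective-disclosure flow: an issuer (such as an ID wallet) attests only a single boolean claim ("over 18"), and the service verifying it learns nothing else – no birth date, name, or document. All names and the HMAC-based attestation are illustrative assumptions, not any real wallet or platform API; a real deployment would use public-key signatures so the verifier never holds the issuer's secret.

```python
import hashlib
import hmac

# Illustrative only: stands in for the issuer's signing key. In a real
# system this would be an asymmetric key pair, with verifiers holding
# only the public half.
ISSUER_KEY = b"demo-issuer-secret"


def issue_attestation(over_18: bool) -> dict:
    """Issuer signs only the boolean claim -- no identity attributes."""
    claim = b"over_18=true" if over_18 else b"over_18=false"
    tag = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "tag": tag}


def verify_attestation(att: dict) -> bool:
    """Service checks the signature and learns one bit, nothing more."""
    expected = hmac.new(ISSUER_KEY, att["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["tag"]) and \
        att["claim"] == "over_18=true"


att = issue_attestation(over_18=True)
print(verify_attestation(att))  # True: access granted, no identity data shared
```

The design point is the shape of the data flow, not the cryptography: the verifying service only ever receives one signed bit, so a breach on its side cannot leak identities or birth dates.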

Where will the exclusion stop, and the inclusion begin?

Once we accept restrictions on the internet for certain groups, we open the door for others, which may target minors further or other groups deemed ‘unworthy’ of unrestricted access.

The age verification narrative in the EU began with the aim of excluding young people from porn platforms and online marketplaces offering alcohol or gambling, but is now expanding to also include social media (or features thereof, such as chat functions), chatbots and interpersonal messaging services such as WhatsApp.

Beyond this increasing encroachment on the life of young people, there is also the risk of age verification tools being instrumentalised for other means. There is a risk that the government of Hungary, for example, could use them in its crusade against LGBTQIA+ educational material, or that countries that limit abortion, like Poland, may use them to limit access to information about reproductive rights.

Creating a false sense of security

All age verification methods can be circumvented to some extent. Since children will not be allowed in age-restricted areas, there will be little incentive to ensure their safety there. This means that, once the restrictions are breached, children will be exposed to the whole spectrum of harms that the current narrative aims to protect them from. Conversely, instead of being alert to risks – as we all should be when communicating with people online whom we do not know – children may believe that they are among peers and so let their guard down, making them more vulnerable to grooming and other forms of exploitation.

An inflexible tool

Age verification measures need to be based on a prior assessment of which content and environments are appropriate for children of a certain age and which are not. Whilst this evaluation may be useful, it is far from a universal standard. Two 13-year-olds may have significantly different levels of maturity depending on their personal development or other factors, such as their gender, cultural practices and socioeconomic background. Further, what one parent deems appropriate might not be shared by another. It can therefore harm children’s growth and autonomy to set a generic standard of what they can and cannot do online, enforced through age verification. It is better to make some parameters the default, and allow young people – in collaboration with a parent or guardian – to opt out where they understand the risks of doing so.

A challenge seen before

The internet is not entirely a safe space for young people (or for adults, for that matter), but this is not a sufficient reason to exclude them from it. Cycling on the road can be dangerous too, but we have nonetheless taken a different approach there: there is no minimum age to ride a bike. Instead, we legislate to adapt the environment and make our roads safer, we instruct our children on how to follow traffic rules, and we provide them protection, with helmets and bright lights, all while cycling next to them.

If this is possible for other aspects of our life in a community, why shouldn’t it be for the internet? Considerable work has already been done to ensure online spaces are safe for all. Let’s not reduce our ambitions. This means calling for an ambitious enforcement of the GDPR and of the DSA, an ambitious DFA, and a strong stance against the deregulating wave that is attacking our hard-won, rights-protecting EU digital rulebook.

Simeon de Brouwer (He/Him)

Policy Advisor