Age against the machine: the race to make online spaces age-appropriate

By EDRi · September 4, 2024

The race is on to make online spaces age-appropriate, but children’s best interest is no Olympic sport. While the internet was not designed with kids in mind, children, teens and young adults are now spending more time online than ever. Parents use video-sharing platforms to show cartoons to their toddlers, while kids and adolescents play online games, engage in social media, learn through online modules, and fashion their identities through their online activities.

This may mean that children and young people are also at risk of being exposed to online harms. Yet keeping them out of the online ecosystem might do more harm than good. These risks should not give states and industry players the right to use privacy-invasive tools under the guise of protecting children, with one such tool quickly gaining popularity: age verification technology.

The rush for a quick fix

In an attempt to make systems that are not designed for minors safer, we see a political rush to pass regulations that resort to drastic measures, cutting children’s internet access and undermining the autonomy of children and their parents in one fell swoop. France’s President Emmanuel Macron has proposed barring access to social media for those under 15 and even banning smartphone access for those under 11. The Kids Online Safety Act, poised to regulate social media’s impact on those below 17, was recently passed by a whopping majority in the US Senate. Youth groups and digital rights groups were front and centre in protesting the Bill, declaring it ‘dead’ after opposition from the Republican caucus and citing its potential to censor transgender and reproductive rights content (among others) by labelling it as ‘harmful to children’.

Across these proposed regulations, age verification is presented as the ‘silver bullet’ to shield minors from pornographic content, gratuitous violence and gambling, to curb exposure to grooming, cyberbullying and targeted ads, and to mitigate many other risks they should not be subject to. In the European Union’s (EU) Digital Services Act (DSA), all major online platforms are required to ensure a high level of privacy, safety and security for minors. The DSA requires online platforms and services to put in place appropriate and proportionate measures, and prohibits them from presenting ads based on profiling of minors. While an obligation to put age verification systems in place is not explicitly stated, the million-dollar question (quite literally for the age verification industry) remains: how will platforms ensure DSA compliance if they do not know the age of the user?

Risks with age verification

Age verification technologies, however, are not the optimal solution they appear to be. Many digital rights advocates, academics and lawmakers have rightfully raised concerns about the threats such tools pose to privacy, data protection and the rights to free expression and association, for children and adults alike.

Based on an assessment of EU human rights law, EDRi previously published a position paper stressing how widespread age verification systems are unlikely to meet the principles of necessity and proportionality, since measures such as document-based age verification and age estimation are disproportionately intrusive. The introduction of online age verification infrastructures can lead to significant risks to children’s rights, and exacerbate social inequalities by barring access due to, for instance, a lack of official documents. This is especially important to note since other potentially effective measures exist, such as privacy and safety by design, and less invasive tools like age self-declaration.

Most existing age verification tools are also incompatible with the General Data Protection Regulation (GDPR). Identifying which users are children creates a dilemma: the very procedures used to single out minors may themselves entail processing personal data, especially that of minors (1). There is also no guarantee of anonymity, especially when third-party certification is involved.

Other risks age verification systems may bring include limiting children’s freedom of expression and participation, as well as access to information that might be beneficial for them. Through ‘age-gating’ or keeping children out of online spaces, children might find themselves unwillingly cut off from peers, services, learning modules and even from their own content.

Summer prototypes on the roll

Age verification technologies pose serious threats to our digital rights, all the more when their accuracy depends on collecting large amounts of personal data, a real tension with the GDPR. Yet governments, industry and other stakeholders are scrambling to find the proper ‘rights-respecting, privacy-preserving, accurate’ age verification system.

Spain recently completed the design phase of the Cartera Digital Beta, a mobile app-based age verification system that generates 30 access keys, each valid for one month, for use on pornographic sites. Keys are generated based on verified proof of legal age, using the Spanish National eID (DNI electrónico), the FNMT Digital Private Individual Certificate, or the Cl@ve identification system for administrative services. Despite being well-intentioned, Spain’s digital wallet has been met with criticism for its tendency to invade users’ sexual privacy, the cybersecurity risks of centralising information, and the attractiveness of black-market access to credentials. Earning the nickname ‘pajaporte’ (“masturbation passport”), the Cartera Digital Beta is set to be rolled out on a massive scale by the end of summer. It potentially foreshadows what could happen with the EU Digital Wallet under the eIDAS reform (the Regulation on electronic identification and trust services), where a wallet containing all digital documents for user identification could become the basis for widespread age verification.
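To make the general pattern behind such key-based schemes concrete, below is a minimal, purely illustrative Python sketch. It is not the actual Cartera Digital or EU Digital Wallet design: the function names (issue_age_token, verify_age_token), the one-month validity window and the use of a shared-secret HMAC instead of real public-key signatures are all assumptions made for brevity. The point it illustrates is that, in principle, the site receiving a key only needs to check a signed ‘over 18’ attestation and an expiry date, not the user’s identity.

```python
# Illustrative sketch only, not the Cartera Digital or eIDAS implementation.
# An "issuer" checks proof of age once (out of band), then mints short-lived,
# pseudonymous age tokens; the verifying website learns only "over 18" + expiry.
import base64, hmac, hashlib, json, secrets, time

ISSUER_SECRET = secrets.token_bytes(32)   # a real system would use public-key signatures
VALIDITY_SECONDS = 30 * 24 * 3600         # assumed one-month validity window

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).decode()

def issue_age_token(verified_over_18: bool) -> str | None:
    """Called only after the issuer has checked an eID or certificate."""
    if not verified_over_18:
        return None
    claim = {
        "attr": "age_over_18",           # the only attribute disclosed
        "nonce": secrets.token_hex(8),   # random, so tokens are not linkable to identity
        "exp": int(time.time()) + VALIDITY_SECONDS,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).digest()
    return _b64(payload) + "." + _b64(sig)

def verify_age_token(token: str) -> bool:
    """What a website would check: a valid signature and expiry, nothing about identity."""
    try:
        payload_b64, sig_b64 = token.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
        sig = base64.urlsafe_b64decode(sig_b64)
    except Exception:
        return False
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False
    claim = json.loads(payload)
    return claim.get("attr") == "age_over_18" and claim["exp"] > time.time()

token = issue_age_token(verified_over_18=True)
print(verify_age_token(token))  # True
```

Even in this idealised form, the privacy question does not disappear: the issuer still learns who requested tokens and when, which is precisely the centralisation and linkability concern raised about the Spanish wallet.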

Aside from Spain’s Cartera Digital, other prototypes claiming to be ‘privacy-preserving’ that are worth monitoring in the coming months include Fraunhofer Institute’s (Germany) app-based prototype commissioned by the Ministry of Family Affairs, CNIL’s (France) token-based cryptographic proof of concept, and euConsent’s AgeAware App. The DSA has also prompted the European Commission to establish a Task Force on age verification and launch a call for evidence for drafting EU-wide guidelines for the protection of minors, to be adopted by early 2025.

Conceptual challenges in categorising ‘children’

What should we make of this fine-tuning of the ‘silver bullet’ that sweeps digital rights under the rug? The concern of industry and Big Tech that policies are ‘hard to translate into product design’ can become a way to evade accountability, since concepts like ‘risk’ and ‘the best interests of the child’ are ambiguously defined and open to (ideological) interpretation.

At the core of this conceptual issue is the blanket categorisation of ‘children’ or ‘minors’ as one homogeneous block. While anyone under 18 is considered a child under the United Nations (UN) Convention on the Rights of the Child, age is never just a number. Development, spanning physical, cognitive, communicative and socio-emotional growth, does not unfold in the same way at every stage of life. It also varies socio-culturally and socio-economically, depending on the child’s or adolescent’s environment.

For instance, a child of five who starts developing likes and dislikes is not the same as a child of ten who is beginning to develop social skills and personal interests. And neither is anywhere near an adolescent whose interests now lie in exercising autonomy, exploring their sexuality and developing their own opinions. These nuances and complexities are lost if all are categorised simply as ‘children’ who are ‘vulnerable’, ‘at-risk’ and in need of protection.

The same can be said of several other categories related to mitigating measures, for instance ‘parental control’. Children and young adults do not live in a standardised parental or guardianship situation. A child in an unsafe domestic situation who must depend on parental consent to access social media is an example of how online systems, packaged as serving the child’s best interest, could exacerbate offline harms.

‘Age-appropriate’ as a balancing act: a case for re-design

Mitigating measures such as age verification aim to protect children and young people through exclusion rather than empowerment. Left in the hands of the state and/or industry, they can mean more consolidated power over who gets to decide what we access, when, how and at what cost. While children and adolescents can be locked out of online spaces through these measures, adults who are unable to verify their age, who do not have access to documents, who risk being identified (such as journalists, sex workers and activists), or whose ages have been wrongfully verified can also be shut out.

Rather than a sprint, making online spaces age-appropriate is a balancing act. It requires careful consideration of what is at stake in adopting measures, weighing protection against empowerment and privacy against efficiency. This should not be framed as a choice between fundamental rights and child protection: what benefits children and young people, such as ensuring the highest level of privacy, safety and security on online platforms, ultimately benefits everyone.

Given that the online environment was not designed with children and young people in mind, perhaps the best way to go about it is to re-design the ecosystem and build a ‘Future Internet’ where protection is not on the basis of exclusion or invasion of privacy, but around care, inclusion and ethical design. Age verification is but one small piece of the puzzle belonging to an ecosystem of solutions that includes privacy and safety by design and by default, increased media and digital literacy for all (and not just children), prioritising systemic regulation over content regulation, and many other, less invasive solutions that prove to be more effective when combined.

Online harms reflect the brokenness and complexity of offline social structures. Making online spaces safe for children and young people should be a case for making them safe for all. We should reflect upon what it is in our society that puts children, youth, adolescents and all of us at risk online and offline, and find systemic and structural solutions, instead of simply pitting our age against the machine.

(1) Nash, Victoria (2020). ‘Gate-Crashers? Freedom of Expression in an Age-Gated Internet’, in Duff, A. (ed.) (2021), Research Handbook on Information Policy. Cheltenham: Edward Elgar Publishing. Available at SSRN: https://ssrn.com/abstract=4208181