Last week, a public consultation by the Irish media regulator, Coimisiún na Meán, on its Online Safety Code drew criticism from civil society organisations and tech experts over the code's dangerous age verification obligations.
This trend of gating access to the internet in response to concerns about illegal and “harmful” content online, and specifically the Irish media regulator’s Online Safety Code, worries privacy experts like us.
The proposed technological “solutions” cannot resolve what are actually much more complex societal issues.
Worse, policymakers worldwide are failing to acknowledge the threat of surveillance and the risk to everyone’s privacy online posed by the use of these systems.
Contrary to what the Irish regulator says, experts have shown that tools like age verification can bring more harm to the very children that the code aims to protect.
Quick tech solutions will never ensure digital safety
The United Nations and UNICEF both emphasise that children have rights to freedom of expression and access to information online.
Like adults, children use the internet for education and entertainment, to explore their identity and sexuality, or to engage politically.
So measures like the ones proposed in the Irish Online Safety Code could end up limiting or controlling young people’s access to legitimate online services.
A 2023 survey showed that 56% of young people in Europe consider their anonymity crucial for their activism and for organising politically among peers. With widespread age verification, young people could be discouraged from this sort of democratic participation online, or even excluded from it entirely.
Age verification tools also rely on harmful mass data gathering that threatens the privacy and security of everyone. Such quick tech solutions will never ensure digital safety.
Those who cannot, or do not want to, submit their sensitive data can be locked out of digital spaces.
EDRi, along with many other civil society organisations, has been fighting biometric surveillance in the EU Artificial Intelligence Act and profiling of children’s behaviours online in the EU Digital Services Act.
Yet with this new binding code, the Irish media regulator is preparing to force many large tech companies to process exactly this kind of sensitive data on a huge scale in order to predict people’s ages.
With many of these companies under the jurisdiction of Ireland, the impact of this code will be felt across Europe, affecting millions of people using services like Instagram and YouTube.
Countries across Europe are jumping on the bandwagon
Ireland is not the only country trying to force the mass use of age verification without considering how invasive and risky these tools can be.
In January, the Council of Ministers in Spain approved a plan that aims to protect children from seeing pornographic content online.
However, to do so, the proposed app would require the scanning of biometric data and the collection of personal details like passport data. This would be technically very complicated and would require trusting that no additional information is shared.
There is a mountain of evidence showing that large tech companies and states cannot be trusted with handling people’s most private data and with looking after their digital safety.
The UK is also exploring technological solutions to verify the age of those accessing certain web pages. In Italy, parental controls have been imposed on children’s mobile devices.
And Belgian lawmakers have recently endorsed the EU’s strategy for children’s safety online, which includes a push for more age verification.
We all want to feel safe and we want that for our children and younger siblings. There is a lot of harmful content being shared online that finds its way to our devices thanks to the toxic recommender systems that define the business model of the few large tech companies that dominate the online space.
However, focusing too much on age verification risks ignoring the root problems that facilitate or exacerbate online harm. It also highlights a wider trend of states rushing to adopt laws that place trust in flawed technology and in the hands of profit-driven companies.
Violating existing rights won’t help anyone
The issue of online safety is not as simple as the creators of age verification tools claim. Research from our network has found that all common forms of age verification have serious drawbacks for privacy and security.
Gating access to the internet can disempower young people and their parents from making informed decisions about what to view while giving large tech companies even more power to decide what we can and cannot see online.
It is so important that social media and other platforms take an approach that protects everyone’s personal data, privacy and autonomy online. Alternatives such as making profiles private by default, offering child accounts, putting content warnings on potentially sensitive content, or asking users to solve a puzzle that a young child would be unable to solve are all less invasive ways of contributing to safety online.
When combined, they can still be effective — whilst avoiding the sledgehammer approach of age verification tools.
Under rising pressure to tackle risks to children’s mental health and well-being posed by internet (mis-)use, lawmakers are increasingly turning to the allure of age verification.
Yet the age verification industry is booming: in the EU alone, it is set to be worth €4 billion by 2028. Lawmakers must not ignore the clear financial interests of those developing and selling these tools, nor the conflict of interest they might create for stakeholders involved in the political debate.
Instead, lawmakers must take a holistic and careful approach to online safety. Whilst there are of course risks, the internet also has fantastic benefits for young people — learning, connecting with others, and discovering themselves.
Preventing young people from being able to engage in these opportunities would be a huge loss — and would also harm adults, and the democratic future we are trying to build. We need online anonymity for journalism, whistle-blowing, accessing reproductive and LGBTQ+ healthcare and more.
The EU already has strong privacy and data protection laws, which most current age verification methods fail to respect. This is deeply concerning because privacy and data protection online are a way of ensuring safety: violating these rights won’t make the internet safer.
Nor will it create the kind of resilient, empowered adults of tomorrow that we all want today’s children to become.
This article was first published by Euronews.
Contribution by: Ella Jakubowska, Senior Policy Advisor, and Viktoria Tomova, Communications and Media Officer, EDRi.