Irish Media Regulator must address dangerous age verification in its new online safety code

On 30 January 2024, EDRi submitted its comments on the Irish Media Regulator’s (Coimisiún na Meán) new Online Safety Code in a public consultation, highlighting significant concerns about age verification.

By EDRi · January 31, 2024

While the Code offers an important opportunity to fight against toxic recommender systems, as pointed out by the Irish Council for Civil Liberties (ICCL), the EDRi network, representing over 50 NGOs across Europe, also wants to highlight significant concerns about age verification.

Why is age verification dangerous?

Age verification is the process of predicting or confirming an individual’s age. In recent years, governments have increasingly pushed for the implementation of age verification tools, claiming they will keep people, and especially children, safe online. This trend is especially visible in the EU’s Child Sexual Abuse Regulation (CSAR) debate, and we also see attempts in the UK and Spain to mandate online age verification in certain contexts.

Experts like EDRi have repeatedly shown that quick tech solutions like online age verification will not ensure digital safety. We’ve been fighting biometric surveillance in the Artificial Intelligence Act and the profiling of children’s behaviour online in the Digital Services Act. Yet with this new, binding Code, the Irish Media Regulator is preparing to force many big tech companies to deploy exactly such systems in order to predict people’s ages.

This superficial approach fails to recognise the threats posed by these invasive systems. Many such tools rely on toxic mass data gathering that threatens the privacy and security of everyone. As pointed out in EDRi’s position paper on age verification, focusing too heavily on age verification risks ignoring the root problems that facilitate or exacerbate online harm. It is essential to develop a holistic approach that prioritises privacy and safety by default and by design.

Read the paper

What does the Irish Online Safety Code propose?

The draft Code currently suggests that any service with a minimum age for opening an account must “implement effective measures to detect under-age users and close their accounts”. Read in light of the General Data Protection Regulation (GDPR), this requirement would amount to an obligation to use age verification, age estimation or another form of “detect[ion] of underage users” for practically all video-sharing platform service providers based in Ireland. Given the number of tech giants registered in Ireland, such a decision would have a wide impact across the European Union, requiring age verification for all users of Instagram, YouTube and more.

What can be done to ensure safeguards for people in the Code?

It is therefore imperative that the Irish Media Regulator take EDRi’s submission into account: the current Code raises serious questions about whether mandatory age verification can be considered proportionate, and whether existing systems are effective enough to meet the requirement of necessity.

Here are the main recommendations EDRi has sent to the Regulator in the event that, despite our advice, it goes ahead with mandatory age verification measures:

  1. At a minimum, the potential risks and harms of age verification and estimation methods must be explicitly listed in the Code, and providers required to address each one. 
  2. The Code should allow providers to rely on age self-declaration if bolstered by other privacy and security by design and by default measures.
  3. If no technologies are available which meet these thresholds, the service provider must not be obligated to implement age verification or estimation measures.
  4. The Code should stipulate that any age verification or estimation system must:
    • Permanently prevent any linking of the internet activity or history of a person to their identity, ensuring that a person cannot be traced by the use of the system (i.e. ‘zero knowledge’);
    • Provide the provider with nothing more than a yes/no answer about the age threshold, and not facilitate any access to the person’s account or information by the provider or by a parent, guardian or other actor; 
    • Consider using tokens instead of storing personal data, and delete personal data processed for the purpose of generating the token immediately afterwards;
    • Not allow any data collected or processed to be used for any other purpose, commercial or otherwise;
    • Not allow the processing of biometric data;
    • Be robust and secure from a cybersecurity perspective;
    • Be consensual, and not overly burdensome for those who do not want or do not have the means to verify their identity in this way;
    • Ensure genuine alternatives for those who do not have formal identity documents, so that minoritised or vulnerable people are not locked out of the internet;
    • Be mindful of potential chilling effects, in particular by ensuring that access to educational and health (including reproductive health) material is not gated behind age verification, which could deter children from feeling comfortable accessing this information.
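To make the “yes/no only” and token requirements above concrete, here is a minimal illustrative sketch of a token-based age attestation flow. It is not a design proposed in the Code or by EDRi: the function names and the shared-secret HMAC scheme are our own assumptions for illustration (a real deployment would use asymmetric signatures so platforms cannot mint tokens themselves). The point it demonstrates is the separation of roles: the issuer checks the age, and the platform receives a signed token carrying only a single boolean, never the birth date or identity.

```python
import base64
import hashlib
import hmac
import json

# Illustrative only: a shared secret between a trusted age-attestation
# issuer and the platform. Assumed for this sketch; a real system would
# use public-key signatures instead of a shared secret.
SECRET = b"demo-secret-do-not-use"


def issue_token(birth_year: int, threshold_age: int = 18, now_year: int = 2024) -> str:
    """Issuer side: checks the age, then signs ONLY a yes/no claim.
    The birth year itself is never placed in the token."""
    claim = {"over_threshold": (now_year - birth_year) >= threshold_age}
    payload = base64.urlsafe_b64encode(json.dumps(claim).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig


def verify_token(token: str) -> bool:
    """Platform side: learns a single boolean and nothing about the
    person's identity, documents or exact age."""
    payload, sig = token.split(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid token")
    return json.loads(base64.urlsafe_b64decode(payload))["over_threshold"]
```

Note that even this toy satisfies only some of the requirements listed above (data minimisation, no biometric processing); unlinkability and the “zero knowledge” property depend on how the issuer and platform are prevented from correlating tokens, which no code snippet alone can guarantee.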

To see EDRi’s full analysis, check out the submission below.