DSA vs. Reality: Are children safer online?

How can social media be safer for people of all ages? During the hearing held in the European Parliament on 24th February, civil society experts led by Panoptykon debated possible solutions with Members of the European Parliament (MEPs) and officials from the European Commission. Yet YouTube, TikTok, and Meta dodged having to answer difficult questions.

By Panoptykon Foundation (guest author) · March 18, 2026

How far are we in this fight in 2026?

The Digital Services Act – the EU regulation adopted in 2022 – was intended to address multiple risks posed by social media platforms to both children and adults. For the first time, through this legislation, the EU forced the biggest platforms to assess systemic risks, limit harmful design, and protect children. On paper, it was a real breakthrough. In practice, everybody, including the Commission, knew that it would take years of investigations, public pressure, and bold political moves to force very large online platforms (VLOPs) to comply.

At the invitation of Members of the European Parliament (MEPs) Kamila Gasiuk-Pihowicz (EPP) and Irena Joveva (Renew), Panoptykon co-hosted a hearing in the European Parliament – “Protecting children online: Europe at a crossroads”. Together with MEPs from three major political groups and officials from the European Commission, we debated with civil society experts the gulf between two poles: the reality described by VLOPs in their official DSA reports, and the world documented by independent researchers and civil society organisations – and experienced by young people themselves.

We invited representatives of Meta, TikTok and YouTube to join the discussion and answer questions directly, yet none of them showed up.

Platforms that exploit rather than cultivate that capacity are failing our children – and failing the law. It is deeply concerning that companies whose representatives and lobbyists are frequently present in this house and across Brussels decided not to take part in today’s discussion with the Commission, Members of Parliament and civil society. Closed-door meetings are not enough. We are asking very basic questions – questions that come directly from the platforms’ own reports. Civil society is not demanding confidential business information. We want clear data and measurable indicators. We need to know whether the measures platforms describe on paper are actually working in real life.

MEP Kamila Gasiuk-Pihowicz, EPP

What platforms claim – and what the research shows

Across their DSA risk assessments, platforms insist they are providing “the strongest safeguards” for minors, especially by: reducing addictive design, introducing (in the case of minors even “by default”) prompts such as “go to bed” and time limits, moderating harmful content effectively, preventing algorithmic amplification of illegal, harmful and age-inappropriate content, verifying users’ age.

We would be among the first to applaud these policies. But independent research paints a very different picture. During the hearing, researchers and civil society experts focused on three areas where the gap between VLOPs’ claims and reality is most striking.

  1. Addictive design: platforms are still designed to keep children hooked. VLOPs highlight that they all offer screen‑time management tools and nudges to take a break. Yet research consistently shows that infinite scroll, autoplay, and hyper‑personalised feeds still work exactly as intended – to maximise time spent on the platform. Their design choices are optimised for attention extraction rather than well-being, with young people describing their experiences as being “stuck” in endless sessions.

    According to a recent survey by Reset Tech in partnership with YouGov, 93% of European youngsters experience that feeling at least once a week and 63.5% every single day. One in five of these teens reported losing sleep every night.

    This is not a bug. It is their business model.

  2. Harmful content: algorithms still recommend what hurts. Platforms claim their measures to limit minors’ access to age-inappropriate content (such as parental controls, minimum age requirements, signals for estimating the age of users, etc.) are effective – even if not perfect, they “result in a significantly lower residual risk profile”, to quote YouTube’s risk assessment. Yet, according to experiments by Amnesty International, Reset Tech, and others, recommender systems actively push harmful material, and minors are still exposed to pro‑eating‑disorder, self‑harm, and depressive content.

    In a recent study conducted in Poland, 39% of 7th-grade students stated that they were receiving suicide-related content on social media without seeking it out. Another study suggests that “minors face disproportionately higher levels of harmful videos, spanning 7-15% versus 4-8% for adults”, which indicates that algorithmic systems push harmful content to children’s accounts regardless of potential harms.

    In addition, the effectiveness of reporting mechanisms is questionable. According to research on content moderation under the DSA conducted by HateAid, “after exhausting all available legal remedies, only 57% of reported illegal content was removed during the project period”.

  3. Age verification: a leaky barrier that shifts responsibility from platforms to their users. Platforms proudly present age‑control tools as a solution. In reality, their gates are wide open to minors. Over 58% of Polish children aged 7-12 have a social media account. Parental control tools do not solve the problem either – they just shift the responsibility from platforms onto children and their parents, as if the harms experienced by children could be prevented with more (self-)discipline. The problem will persist even if the minimum digital age is raised.

    At the hearing, youth activist Leandra Voss stated: “Age restrictions shift the blame away from the harmful design of the platforms. Thus they simply delay when this harm is done to people.”

A rare consensus: three political groups, one diagnosis

The alignment among MEPs from the EPP, S&D, and Renew – three major political groups – reflected the urgency of the task that European institutions are facing. The business model of platforms is incompatible with children’s safety. The EU has legal and political tools to demand fundamental changes, not superficial corrections.

Platforms profit from attention. Their incentives are to keep users scrolling, not to protect their wellbeing. This cross‑party consensus matters: it signals that the political centre of gravity in Europe is shifting.

Safety by design isn't a limitation for freedom; it's a legal necessity for a functioning society.

MEP Irena Joveva, Renew

The Commission acknowledges both the problem and the need for prompt action

Officials from the European Commission responsible for reviewing platforms’ DSA reports and drafting the upcoming Digital Fairness Act participated in the debate and made several crucial commitments.

  1. Platforms must ensure that their reports are “comprehensible”.
    Commission officials recognised what civil society has been saying for months: the risk assessments currently provided by platforms are vague, with unclear methodologies and claims that are often not backed by data. Platforms’ claims must be matched by evidence; failing that, these reports remain empty statements.
  2. Researchers and CSOs are invited to share their evidence.
    The Commission announced the launch of the DSA Whistleblower Tool, which enables researchers, civil society organisations and independent experts to submit evidence, report harmful practices, and access data relevant to DSA enforcement.
  3. The Commission is ready to go further than the DSA.
    Officials confirmed that the upcoming Digital Fairness Act may introduce: stricter rules on addictive design, obligations to switch off certain manipulative features, ‘user empowerment’ requirements for recommender systems, and specific, higher protections for children as consumers. In the meantime, the question of a minimum digital age is being debated by a high-level expert panel convened by President von der Leyen. If political pressure on the issue continues across Europe, new Europe-wide legislation will follow. The technical side of age verification falls outside the scope of the Digital Fairness Act; however, age assurance may be introduced as one of the measures for protecting minors in the upcoming update of horizontal consumer law (the DFA).

Why this hearing mattered

The stakes are too high to rely on self‑reporting by companies whose profits depend on keeping children online. That was the main reason we organised the hearing and invited all stakeholders to take part in the exchange: we wanted platforms to hear our questions, explain how their mitigation measures are meant to work, and share the relevant metrics behind their claims. By choosing not to attend, they showed a lack of interest in public consultations with civil society and decision-makers alike.

Across participants, we shared the concern that, two years into DSA enforcement, platforms’ declarations of intent are not enough to address harm, and that risk assessments should be carried out transparently. It was agreed that mitigating online risks for children requires confronting VLOPs’ business model, which is at the heart of the problem.

Adoption of the DSA in 2022 was a crucial first step. Uncompromising enforcement is what needs to happen now. As CSOs, we will continue to support the Commission by providing independent evidence. We will also make sure that the upcoming DFA explicitly prohibits pervasive dark patterns (still) used by VLOPs and (at least) the most obvious forms of addictive design.

Contribution by: EDRi member, Panoptykon Foundation