Many in the Brussels EU bubble seem to agree that Big Tech companies need to be regulated more strictly through the EU’s flagship new digital law, the Digital Services Act (DSA).
This week, it’s the European Parliament’s Legal Affairs (JURI) Committee’s turn to give it a go, but its EPP Group rapporteur, Geoffroy Didier, has not made many friends in the process.
By pushing for extreme content removal rules that could increase Big Tech’s market dominance and by treating private chats as public social media posts, Didier has managed to alienate many of the committee’s most senior digital policy experts.
The French rapporteur’s proposals are so harmful to fundamental rights and basic principles of fairness that we at European Digital Rights (EDRi) see a serious case for rejecting the JURI Opinion altogether.
Didier’s proposals fail to address the systemic problems of the centralised platform economy and risk reinforcing censorship. While online platforms have a role to play in dealing with systemic risks, holistic – not techno-centric – approaches are needed to guarantee the safety and free expression of everyone.
During a recent EDRi-led workshop on the impact of biometric mass surveillance on Roma and Sinti people, we were reminded of the harsh reality of online harms against these groups when participants asked for the chat to be moderated to prevent racist abuse.
Online hate speech undermines marginalised groups’ ability to participate in the public sphere in disproportionate and particularly sinister ways. Research by Amnesty International has shown that women of colour, women with disabilities, and queer and trans women are primary targets of gender-based violence online.
This often leads to self-censorship and/or offline violence. EDRi member ARTICLE 19 also reminds us that freedom of expression cannot exist without all people being heard, and that “ensuring that no one is censored on the basis of who they are” is critical.
Effective anti-hate speech policies must recognise that online abuse often reflects systemic harms in society. At the same time, platforms react arbitrarily, and often inappropriately, to different types of abuse and different users.
In addition, courts continue to fail to provide justice to victims, and alternative content-review bodies risk lacking both independence and expertise.
When the solution to illegal or harmful content online is centred on removal, already marginalised groups are the most likely to be affected. The recent backlash against sex workers after the sudden prohibition of sexual content on the platform OnlyFans is a good example.
Similarly, Black women and women advocating for body acceptance often have their content taken down. This is why the DSA must not leave decisions over the legality of content, or more broadly over what is acceptable online, to the arbitrary judgement of platforms.
The DSA’s proposals on risk management for ‘Very Large Online Platforms’ are likely to reinforce the dominance of Big Tech, including through the uptake of AI. Thematic ‘codes of conduct’ (such as the Code of Conduct on Hate Speech) escape democratic scrutiny and lean heavily on ‘trusted flaggers’, who may be disconnected from grassroots concerns or even dependent on EU or state influence.
Instead, independent human rights impact assessments involving representatives of affected groups and civil society organisations should inform any proactive measure, policy or legislative change.
Models of restorative justice applied to content moderation are promising: rather than taking a purely punitive approach, they take into account the needs of the person who was harmed, the person who caused harm, and the wider community.
Instead of solely pushing for more and faster removal of online content, legislation around content moderation on platforms should focus on addressing the root cause of toxic content amplification: the data- and advertising-driven business model of Big Tech.
The monetisation of noxious content boosts hatred and polarisation, and even amplifies content that worsens mental health problems.
EDRi and its partners call for alternatives to the commercial tracking and surveillance ecosystem that fuels this kind of content amplification and targeting. EDRi member Panoptykon has developed proposals under which platforms’ recommender systems would no longer rely on data inferred from pervasive tracking, but would instead empower users to decide what kind of online content they want to see, how, and from whom.
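Purely as an illustrative sketch of that underlying idea – not Panoptykon’s actual design, and with all names and fields hypothetical – a user-controlled feed could rank content using only settings the user has explicitly chosen, with nothing inferred from behavioural tracking:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of a user-controlled recommender: every ranking
# signal is an explicit user choice; nothing is inferred from tracking.

@dataclass
class FeedPreferences:
    followed_sources: set[str]   # accounts the user chose to follow
    topics: set[str]             # topics the user opted into
    chronological: bool = True   # the user, not the platform, picks the ordering
    muted_keywords: set[str] = field(default_factory=set)

@dataclass
class Post:
    source: str
    topic: str
    text: str
    posted_at: datetime

def build_feed(posts: list[Post], prefs: FeedPreferences) -> list[Post]:
    """Select and order posts using only the user's explicit settings."""
    selected = [
        p for p in posts
        if p.source in prefs.followed_sources
        and p.topic in prefs.topics
        and not any(k in p.text.lower() for k in prefs.muted_keywords)
    ]
    if prefs.chronological:
        selected.sort(key=lambda p: p.posted_at, reverse=True)
    return selected
```

The point of the sketch is that no field in FeedPreferences is derived from surveillance of the user’s behaviour, and the user can change any of them at any time.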
European Commission President Ursula von der Leyen called digital policy a “make-or-break” issue in her State of the Union speech, a message that Big Tech was quick to embrace, eager to focus the debate on innovation and technological uptake.
But what’s really ‘broken’ are the heavily lobbied EU institutions that put profit before people. To counter this, EDRi and its allies will continue to promote holistic solutions that foster a just, fair and safe internet for all.
This article was first published by The Parliament Magazine.