COVID-Tech: COVID infodemic and the lure of censorship
In EDRi’s series on COVID-19, COVIDTech, we will explore the critical principles for protecting fundamental rights while curtailing the spread of the virus, as outlined in the EDRi network’s statement on the virus. Each post in this series will tackle a specific issue at the intersection of digital rights and the global pandemic in order to explore broader questions about how to protect fundamental rights in a time of crisis. In our statement, we emphasised the principle that states must “defend freedom of expression and information”. In this second post of the series, we look at the impact that measures to fight the spread of misinformation could have on freedom of expression and information. Automated tools, content-analysing algorithms and state-sponsored content moderation have all become normal under COVID-19, and they threaten many of our essential fundamental rights.
We already knew that social media companies perform poorly when it comes to moderating content on their platforms. Regardless of the measures they deploy (whether automated processes or human moderators), they make discriminatory and arbitrary decisions. They fail to understand context and cultural and linguistic nuances. Lastly, they fail to provide effective access to remedies.
In times of a global health crisis, when accessing vital health information, maintaining social contact and building solidarity networks are so important, online communications, including social media and other content hosting services, have become even more essential tools. Unfortunately, they are also vectors of the disinformation and misinformation that erupt in such exceptional situations and threaten public safety and governmental responses. However, private companies – whether voluntarily or under pressure from governments – should not impose overly strict, vague or unpredictable restrictions on people’s conversations about important topics.
Automated tools don’t work: what a surprise!
As the COVID-19 crisis broke out, emergency health guidelines forced big social media companies to send their content moderators home. Facebook and the like promised to live up to expectations by basing daily content moderation on their so-called artificial intelligence. It took only a few hours for glitches in the system to appear.
Their “anti-spam” systems struck down quality COVID-19 content from trustworthy sources as violations of the platforms’ community guidelines. Sharing newspaper articles, linking to official governmental websites or simply mentioning the term “coronavirus” in a post could result in your content being pre-emptively blocked.
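To make the failure mode concrete, here is a minimal sketch of a naive keyword-based filter of the kind described above. The term list and example posts are invented for illustration; this is not any platform’s actual system:

```python
# Minimal sketch of a naive keyword-based "anti-spam" filter, illustrating
# why such systems over-block. Term list and posts are hypothetical.

BLOCKED_TERMS = {"coronavirus", "covid-19"}  # assumption: a crude term list

def is_blocked(post: str) -> bool:
    """Flag any post containing a blocked term, regardless of source or context."""
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

posts = [
    "Coronavirus self-isolation guidance: https://www.who.int/",  # trustworthy
    "Miracle cure! Drink bleach to beat coronavirus!",            # harmful
    "Stay safe everyone, wash your hands.",                       # benign
]

for post in posts:
    print(is_blocked(post), "->", post)

# Both the official guidance and the harmful post are blocked: without
# understanding source and context, the filter cannot tell quality
# information from misinformation about the same topic.
```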
This whole trend perfectly demonstrates why relying on automated processes can only be detrimental to freedom of expression and to the freedom to receive and impart information. The situation even led the Alan Turing Institute to suggest that content moderators should be considered “key workers” in the context of the COVID-19 pandemic.
Content filters show high margins of error and are prone to over-censoring. Yet the European Parliament adopted a resolution on the EU’s response to the pandemic which calls on social network companies to proactively monitor and “stop disinformation and hate speech”. In the meantime, the European Commission continues its “voluntary approach” with the social media platforms and is contemplating proposing a regulation soon.
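A rough back-of-the-envelope calculation shows why even a seemingly accurate filter over-censors at scale. All numbers below are illustrative assumptions, not platform data:

```python
# Illustrative arithmetic: at platform scale, even small error rates
# translate into huge numbers of wrongly removed posts.

posts_per_day = 100_000_000   # assumption: daily posts on a large platform
disinfo_rate = 0.001          # assumption: 0.1% of posts are disinformation
true_positive_rate = 0.95     # assumption: filter catches 95% of disinformation
false_positive_rate = 0.01    # assumption: filter wrongly flags 1% of legitimate posts

disinfo = posts_per_day * disinfo_rate
legitimate = posts_per_day - disinfo

caught = disinfo * true_positive_rate
wrongly_removed = legitimate * false_positive_rate

print(f"Disinformation removed:   {caught:,.0f}")      # ~95,000 per day
print(f"Legitimate posts removed: {wrongly_removed:,.0f}")  # ~999,000 per day
```

Even under these optimistic assumptions, the filter removes roughly ten legitimate posts for every piece of disinformation it catches.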
Criminalising misinformation: a step too far
In order to respond swiftly to the COVID-19 health crisis, some Member States are desperately trying to control the flow of information about the spread of the virus. In their efforts, they are seduced by hasty legislation that criminalises disinformation and misinformation, which may ultimately lead to state-sponsored censorship and the suppression of public discourse. For instance, Romania granted new powers to its National Authority for Administration and Regulation in Communications to order take-down notices for websites containing “fake news”. Draft legislation in neighbouring Bulgaria originally included the criminalisation of spreading “internet misinformation”, with fines of up to 1,000 euros and even imprisonment of up to three years. In Hungary, new emergency measures include the prosecution and potential imprisonment of those who spread “false” information.
The risks of abuse of such measures and of unjustified interference with the right to freedom of expression directly impair the media’s ability to provide objective and critical information to the public, which is crucial for individuals’ well-being in times of a national health crisis. While extraordinary situations definitely require extraordinary measures, those measures have to remain proportionate, necessary and legitimate.
Both the EU and Member States must refrain from undue interference and censorship and instead focus on measures that promote media literacy and that protect and support diverse media, both online and offline.
None of the approaches taken so far shows a comprehensive understanding of the mechanisms that enable the creation, amplification and dissemination of disinformation as a result of curation algorithms and online advertising models. It is extremely risky for a democratic society to rely on only a very few communications channels, owned by private actors whose business model feeds on sensationalism and shock.
The emergency measures being adopted in the fight against the COVID-19 health crisis will determine what European democracies look like in the aftermath. The upcoming Digital Services Act (DSA) is a great opportunity for the EU to address the monopolisation of our online communication space. Further action should be taken specifically in relation to the micro-targeting practices of the online advertising industry (Ad Tech). This crisis has also shown us that the DSA needs to create meaningful transparency obligations, enabling a better understanding of the use of automation and supporting future research – starting with transparency reports that include information about content blocking and removal.
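As an illustration of what such a transparency obligation could look like in practice, here is a minimal sketch of a machine-readable record for a single blocking or removal decision. The field names and schema are assumptions made for illustration; no legislation prescribes this exact format:

```python
# A hypothetical machine-readable transparency record for one content
# blocking/removal decision. Schema and field names are illustrative only.

from dataclasses import dataclass, asdict
import json

@dataclass
class RemovalRecord:
    decision_id: str        # stable identifier so decisions can be audited
    basis: str              # which law or community guideline was applied
    automated: bool         # whether an algorithm made the decision
    human_review: bool      # whether a person confirmed it
    appeal_available: bool  # whether the user could contest the decision
    outcome: str            # e.g. "removed", "restored_on_appeal"

record = RemovalRecord(
    decision_id="2020-04-0001",
    basis="community_guidelines/health_misinformation",
    automated=True,
    human_review=False,
    appeal_available=False,
    outcome="removed",
)
print(json.dumps(asdict(record), indent=2))
```

Records of this kind, published at scale, would let researchers measure how much content is blocked purely by automation and how often appeals reverse those decisions.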
What we need for a healthy public debate online is not gatekeepers empowered by governments to restrict content in a non-transparent and arbitrary manner. Instead, we need diversified, community-led and user-empowering initiatives that allow everyone to contribute and participate.
Read more:
Joint report by Access Now, Civil Liberties Union for Europe, European Digital Rights, Informing the “disinformation” debate (18.10.18)
https://edri.org/files/online_disinformation.pdf
Access Now, Fighting misinformation and defending free expression during COVID-19: Recommendations for States (21.04.20)
https://www.accessnow.org/cms/assets/uploads/2020/04/fighting-misinformation-and-defending-free-expression-during-covid-19-recommendations-for-states-1.pdf
Digital rights as a security objective: Fighting disinformation (05.12.18)
https://edri.org/digital-rights-as-a-security-objective-fighting-disinformation/
ENDitorial: The fake fight against fake news (25.07.18)
https://edri.org/enditorial-the-fake-fight-against-fake-news/
(Contribution by Chloé Berthélémy, EDRi Policy Advisor)