
Upload filters endanger freedom of expression

There are several examples of automated upload filters censoring human rights activists. Filters used to classify content as “offensive”, “extremist” or simply “inappropriate for minors” have ended up censoring videos that denounce injustices.

By EDRi · May 16, 2018

In September 2016, the European Commission proposed a controversial draft for a new Copyright Directive that includes de facto mandatory automated upload filters for every internet user in the EU. This mechanism, designed to prevent alleged copyright infringements, leaves the policing of potentially illegal content uploaded by users to algorithms. An idea that was supposed to be an efficient way to safeguard authors’ rights has in reality turned out to be a “censorship machine” that does not even address the so-called “value gap” between platforms and rightsholders. Instead, it reveals a “values gap” between European policy-makers and the European values they are meant to uphold.


Handing algorithms the power to decide what can and cannot be expressed online, without any human involvement, poses serious risks to our societies. Particularly sensitive issues that touch on our fundamental right to freedom of expression should be decided by a court, not by a machine. There are already many real-life examples of how automated upload filters fail, censoring a broad range of content from innocent videos to human rights activism.

Kittens purring infringes copyright: YouTube’s Content ID system, which filters its users’ uploads, decided that a cat’s purring was a copyright infringement. The purring was matched to a musical composition owned by a company, making the purring a “pirate” product. This perfectly illustrates how arbitrary the matches made by such filters can be.
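
To see why such mismatches happen, consider the toy sketch in Python below. It is not how Content ID actually works (that system is proprietary and far more sophisticated); the fingerprints, the reference database and the threshold are all invented for illustration. What it does show is the core failure mode of any purely similarity-based filter: once two pieces of audio happen to share broad acoustic features, a single numeric threshold decides the outcome, and no human reviews the decision.

```python
# A minimal sketch of threshold-based fingerprint matching, illustrating
# how an automated filter can block content with no human review.
# All feature vectors, database entries and the threshold are invented
# for illustration; real systems such as Content ID are far more complex.

from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical "fingerprints": coarse energy per frequency band
# (low -> high), registered by rightsholders in the reference database.
REFERENCE_DB = {
    "bass-heavy track (rightsholder X)": [0.9, 0.7, 0.2, 0.1],
}

MATCH_THRESHOLD = 0.95  # one global knob decides what gets blocked

def upload_filter(upload_name, fingerprint):
    """Block the upload if it resembles anything in the database."""
    for title, ref in REFERENCE_DB.items():
        score = cosine_similarity(fingerprint, ref)
        if score >= MATCH_THRESHOLD:
            # No human ever looks at this decision.
            return f"BLOCKED: '{upload_name}' matched '{title}' ({score:.2f})"
    return f"ALLOWED: '{upload_name}'"

# A purring cat is mostly low-frequency energy too, so its fingerprint
# happens to point in nearly the same direction as the reference track.
print(upload_filter("cat purring video", [0.8, 0.6, 0.1, 0.05]))
```

With the threshold as the only safeguard, the purring video is blocked automatically, and contesting the decision is left to the uploader after the fact.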

Content used for educational purposes: Harvard Professor Lawrence Lessig had one of his lectures flagged by the platform’s copyright filter because it used parts of several well-known songs. Even though the music was part of the didactic material of his lecture, and its use was therefore legal for educational purposes, YouTube proceeded to mute the entire lecture. This is a very telling example of how filters can restrict access to culture and education without taking into account exceptions that permit the use of protected content.

Human rights activism censored: There are several examples of automated upload filters censoring human rights activists. Filters used to classify content as “offensive”, “extremist” or simply “inappropriate for minors” have ended up censoring videos that denounce injustices. For instance, thousands of videos documenting atrocities in the Syrian war were removed, which resulted in the loss of extremely valuable material for the prosecution of war crimes. Another example of this censorship is the removal of videos by LGBT activists.

The examples mentioned above show that automated upload filters can lead to the illegitimate removal of material from the internet. In addition, they can push internet users to self-censor and limit their uploads “voluntarily” for fear of being blocked. These practices deeply affect human rights such as freedom of expression and access to information, culture and education. If even copyright experts find it complex to understand the freedoms to use cultural works in the EU, algorithms are far less likely to grasp the context and purpose of the use of protected material, let alone to decide whether content is “offensive” or unsuitable for its audience. European policy-makers must take this reality into account and seriously reconsider the use of filters and their impact on democratic societies.

Censorship Machine: busting the myths (13.12.2017)
https://edri.org/censorship-machine-busting-myths/

When filters fail: These cases show we can’t trust algorithms to clean up the internet (28.09.2017)
https://juliareda.eu/2017/09/when-filters-fail/

YouTube’s Content ID (C)ensorship Problem Illustrated (02.03.2010)
https://www.eff.org/deeplinks/2010/03/youtubes-content-id-c-ensorship-problem

(Contribution by María Rosón, EDRi intern)
