
Algorithms – censorship à la carte?

By EDRi · July 12, 2016

On 17 June, the Counter Extremism Project (CEP) presented software designed to stop the proliferation of “extremist” video and audio online. CEP is a non-profit organisation whose stated mission is “combating extremist groups”. Of course, the algorithm alone can do nothing: To be operational, it needs a database of already identified “extremist” content. Humans have to define what “extremism” looks and sounds like – and they do not always agree on the definition. Therefore, human mistakes and bias become computer mistakes and bias.


In principle, CEP’s algorithm is not groundbreaking: It is based on PhotoDNA, a widely used tool developed by Microsoft to detect previously identified child abuse material online. PhotoDNA was rolled out with the reassurance that it would ONLY be used to deal with universally illegal child abuse material, and only in relation to Interpol’s “worst of the worst” list of images.

Large companies like Facebook and Microsoft use PhotoDNA to check uploads to their services, including private content on their “cloud” servers. PhotoDNA computes an individual signature, a so-called hash, of every uploaded file. This hash is resistant to alterations of the image. If it matches a hash in the database, the content can be flagged and removed. Microsoft makes user data associated with attempted uploads of such material available to law enforcement agencies. Nobody has ever seen fit to review whether the technology brings a real benefit, or to ensure that its use is not counterproductive in some way.
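PhotoDNA’s exact algorithm is proprietary, but the general idea of a “robust hash” can be illustrated with a much simpler technique, the difference hash (dHash). The sketch below is only an illustration of the concept, not of PhotoDNA itself; the function names and the use of the Pillow imaging library are our own assumptions.

# Minimal sketch of a "robust" (perceptual) image hash, to illustrate the
# concept behind PhotoDNA-style matching. PhotoDNA itself is proprietary;
# this is the far simpler difference hash (dHash).
# Requires the Pillow library: pip install Pillow
from PIL import Image

def dhash(image_path: str, hash_size: int = 8) -> int:
    """Compute a 64-bit difference hash of an image.

    The image is shrunk to a tiny greyscale thumbnail, so small changes
    (recompression, resizing, slight edits) barely affect the result.
    """
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | int(left > right)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits in which two hashes differ; a small distance means a near-duplicate."""
    return bin(a ^ b).count("1")

Two copies of the same picture, one of them slightly recompressed, will typically end up only a few bits apart; this is what makes such hashes useful for recognising re-uploads of known material.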

This approach lends itself to content that is universally illegal. Now that Somalia has ratified the Child Rights Convention, the USA is the only country in the world not to have ratified that instrument, and even the USA has ratified the Optional Protocol on child pornography. Legislators and courts have clearly defined what falls into this category, and such material cannot be legitimately quoted or re-used. The definition of “extremist content”, however, is anything but clear, and CEP’s algorithm does not (and logically cannot) contain such a definition either. Even if it were to use a database of previously identified material, that would still create problems for legitimate quotation, research and illustration purposes, as well as problems regarding laws that vary from one jurisdiction to another.

Looking at the EU, the presentation of the algorithm comes at a politically opportune time: Together with Internet companies, the European Commission is currently setting up a “Joint Referral Platform”. This de facto revival of the “Clean IT” project (https://edri.org/rip-cleanit/) aims to prevent the unnoticed re-upload of previously removed material through mandatory monitoring of every single file that every individual in Europe uploads to the Internet. According to the German Federal Government, the new platform will also rely on content recognition through robust hashing, as provided by CEP.
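The internal design of the Joint Referral Platform has not been made public. Purely to illustrate what hash-based upload filtering implies, the hypothetical sketch below (reusing dhash() and hamming_distance() from above; the blocklist values and the distance threshold are invented) checks every upload against a database of hashes of previously removed material:

# Hypothetical sketch of upload-time filtering against a shared hash database.
# The actual platform's design is not public; the blocklist entries and the
# distance threshold below are invented for illustration.
BLOCKLIST = {0x9F3A6C1E00FF42A1, 0x1234ABCD5678EF90}  # hashes of removed files (invented)
MAX_DISTANCE = 4  # bits that may differ while still counting as a match (assumed)

def is_flagged(upload_path: str) -> bool:
    """Return True if an upload is a near-duplicate of blocklisted content."""
    h = dhash(upload_path)
    return any(hamming_distance(h, banned) <= MAX_DISTANCE for banned in BLOCKLIST)

The threshold illustrates the trade-off at the heart of this article: set it loosely and altered re-uploads are caught, but perfectly legitimate images, such as a quotation in a news report, are flagged along with them; the algorithm has no way to tell the difference.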

The EU’s Joint Referral Platform has the potential to build upon the arbitrary efforts of the European Police Office’s (Europol’s) “Internet Referral Unit” (IRU). This new Europol department actively checks platforms like Facebook and Twitter for content that is not illegal, but potentially “incompatible” with those companies’ terms of service. It sends referrals to the companies so that they can “voluntarily consider” what to do with content that has been objected to by a police agency. The Joint Referral Platform could automate this informal censorship by automatically detecting re-uploads. However, it remains unclear whether any investigative measures will be taken beyond the referral – particularly as Europol’s activities, bizarrely, do not deal with illegal material. There is, obviously, no redress available for incorrectly identified and deleted content, as it is not the law but broad and unpredictable terms of service that are being applied.

As long as the answers to these questions are missing, we leave it to the few “content moderators” of social media platforms to enforce their terms of service. As Melvin Kranzberg, a professor of the history of technology, rightly noted: “Technology is neither good nor bad; nor is it neutral.”

Is it really proportionate to scan and filter every single upload from every single European, just to make sure it is legal? And if Europe takes the lead in mass surveillance and filtering of its citizens’ uploads to the internet, what hope is there for an open and democratic internet elsewhere in the world?

Counter Extremism Project Unveils Technology to Combat Online Extremism (17.06.2016)
http://www.counterextremism.com/press/counter-extremism-project-unveils-technology-combat-online-extremism

There’s a new tool to take down terrorism images online. But social-media companies are wary of it. (21.06.2016)
https://www.washingtonpost.com/world/national-security/new-tool-to-take-down-terrorism-images-online-spurs-debate-on-what-constitutes-extremist-content/2016/06/20/0ca4f73a-3492-11e6-8758-d58e76e11b12_story.html

Europol: Non-transparent cooperation with IT companies (18.06.2016)
https://edri.org/europol-non-transparent-cooperation-with-it-companies/

(Contribution by Fabian Warislohner, EDRi intern)
