By Joe McNamee

In autumn 2015, the Committee on Civil Liberties, Justice and Home Affairs of the European Parliament (LIBE) will resume its discussions of a draft resolution on “radicalisation”, led by Rachida Dati, a French conservative member. Her draft includes several bizarre statements, but one on Internet “giants” stands out as particularly extreme.

The proposal includes an entirely superfluous call for “Internet giants” (but not everyone else?) to be “made aware” of their responsibilities to delete illegal content. These obligations have existed since 2000 and will therefore hardly be news to any internet company, and certainly not to the ones with the best-funded legal departments.

Then, however, the text becomes somewhat more sinister. It calls on EU Member States to consider criminal sanctions against undefined “digital actors” who do not take unspecified “action” “in response to the spread of illicit messages that praise terrorism on their internet platforms”. The proposal then goes on to suggest that an inadequate response from the actor “should be considered an act of complicity with praising terrorism and should consequently be punished”. This would create overwhelming pressure on any company, organisation or individual whose online presence could be considered a “platform” – particularly smaller ones that could not afford any litigation – to delete any content that risked subsequently being considered illegal.

The first question that needs to be asked is: why? What experience in Europe suggests that Internet platforms are leaving illegal terrorist material online? What experience is so severe that criminal sanctions are necessary? What experience shows that, in any European country, the existing sanctions are inadequate? In a democratic society, is it appropriate to use coercive measures to persuade private companies to delete content, in the complete absence of any counterbalancing obligation to leave legitimate (even if unwelcome) speech online?

Dati’s suggestion would bring Europe very closely into line with the Chinese “Measures on the Administration of Internet Information Services”, adopted in 2000.

As bad as this is, it actually gets worse. When Members of the European Parliament (MEPs) were drafting their amendments, they relied on translations that drifted away from the original meaning. For example, the English translation would make Internet companies liable for “illegal messages OR messages praising terrorism”, i.e. the platforms would become criminally liable for failing to take action against messages that were not, in fact, illegal.

Worse still, rather than objecting to this notion, the Parliamentarian representing the Socialists and Democrats group, the “shadow rapporteur” Ana Gomes, suggested that law enforcement authorities should have the quasi-judicial role of telling Internet companies what they should delete and, in addition, that the companies should incur criminal liability for failing to do everything “to the best of their human and technical capability” not just to delete illegal content, but to identify it as well. Instead of the rule of law and a Charter of Fundamental Rights that requires restrictions on our human rights to be necessary, proportionate and effective, we would have the police as judges and automatic software finding and automatically deleting anything that could create a legal risk for Internet companies.

This would make European law somewhat more restrictive than China’s Administrative Measures on Internet Information Services (2000), which do not require proactive searching for potentially illegal content:

Article 15. Internet information service providers shall not produce, reproduce, distribute or disseminate information that includes the following contents:

(1) content that is against the basic principles determined by the Constitution;
(2) content that impairs national security, divulges State secrets, subverts State sovereignty or jeopardizes national unity;
(3) content that damages the reputation and interests of the State;
(4) content that incites ethnic hostility and ethnic discrimination or jeopardizes unity among ethnic groups;
(5) content that damages State religious policies or that advocates sects or feudal superstitions;
(6) content that disseminates rumors, disturbs the social order or damages social stability;
(7) content that disseminates obscenity, pornography, gambling, violence, homicide and terror, or incites crime;
(8) content that insults or slanders others or that infringes their legal rights and interests; and
(9) other content prohibited by laws or administrative regulations.
Article 16. If an Internet information service provider discovers that information transmitted by its website clearly falls within the contents listed in Article 15 hereof, it shall immediately discontinue the transmission of such information, keep relevant records and make a report to relevant State authorities.

LIBE Draft Report on prevention of radicalisation and recruitment of European citizens by terrorist organisations (2015/2063(INI)) (01.06.2015)
http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-%2f%2fEP%2f%2fNONSGML%2bCOMPARL%2bPE-551.967%2b01%2bDOC%2bPDF%2bV0%2f%2fEN

LIBE Draft Report on prevention of radicalisation and recruitment of European citizens by terrorist organisations (2015/2063(INI)) (in French, 01.06.2015)
http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-%2f%2fEP%2f%2fNONSGML%2bCOMPARL%2bPE-551.967%2b01%2bDOC%2bPDF%2bV0%2f%2fFR

Amendments on prevention of radicalisation and recruitment of European citizens by terrorist organisations (03.07.2015)
http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-%2f%2fEP%2f%2fNONSGML%2bCOMPARL%2bPE-560.923%2b01%2bDOC%2bPDF%2bV0%2f%2fEN

China’s Administrative Measures on Internet Information Services (20.09.2000)
http://www.china.org.cn/business/2010-01/20/content_19274704.htm

(Contribution by Joe McNamee, EDRi)
