Lawless, unproven filtering and blocking of content as “best practice”

By EDRi · October 26, 2012

On Monday of next week (29 October 2012), the European Union and the United States will hold a “summit” (draft agenda) on “Exchange of Best Practices for Child Protection Online”. In the course of that meeting, measures to prevent the “re-uploading of the content” will be discussed. Both the European Commission and the United States appear to think that widespread, suspicionless upload filtering is “best practice”.

The United States has had some “voluntary” filtering of images being uploaded to the Internet in place for nearly two years, without any official reservations being expressed – or any statistics being produced. The European Commission, as shown by its enthusiastic support for the measure in discussions surrounding the “CEO Coalition”, also appears to be a strong believer in the strategy. The same approach has also been proposed in the context of the “Clean IT” project.

The European Commission’s support for this “best practice” is interesting for a number of reasons. To the best of our knowledge, it has never asked for any independent testing of the technology being used in the United States (Microsoft’s PhotoDNA, which is also being pushed in the EU). If and when the technology is rolled out in Europe, the number of images being filtered is intended to be significantly larger. Despite this, there are no plans that we are aware of to test whether the technology is robust enough for that environment.
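To make concrete what “robustness” means here, a minimal hypothetical sketch of hash-based upload filtering may help. PhotoDNA’s actual perceptual-hash algorithm is proprietary and not described in this article; as a stand-in, the sketch below uses an exact SHA-256 match against a blocklist. The blocklist contents, image bytes and function names are all invented for illustration:

```python
import hashlib

# Hypothetical blocklist of digests of known images. A real system
# such as PhotoDNA uses a proprietary perceptual hash designed to
# survive resizing and re-encoding; SHA-256 is a stand-in only.
BLOCKLIST = {
    hashlib.sha256(b"known-image-bytes").hexdigest(),
}

def is_blocked(upload: bytes) -> bool:
    """Return True if the uploaded bytes match a blocklisted digest."""
    return hashlib.sha256(upload).hexdigest() in BLOCKLIST

print(is_blocked(b"known-image-bytes"))   # exact copy → True
print(is_blocked(b"known-image-bytes!"))  # one byte changed → False
```

An exact-match filter like this misses any trivially altered copy, while a fuzzier perceptual match risks false positives as the database grows – which is precisely why scaling to a much larger European image set would warrant the independent testing that, as far as we know, has never been requested.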

Also, as far as we can tell, the Commission has never asked for feedback from the United States regarding the effect of PhotoDNA on investigations. For example, how frequently does it happen in the United States that law enforcement authorities launch an investigation after an image has been blocked on the grounds that it is not only criminal but also one of the “worst of the worst” images ever identified by law enforcement authorities? We are informally led to believe that the figure is fewer than one in five cases, although this has not been confirmed.

Another very obvious question that has not been answered is the extent to which the software could be abused by criminals. How extensively has the software been tested in order to ensure that it cannot be “hacked” or misused, to the detriment of the very children that it is meant to be protecting? Not at all, as far as we have been able to tell in our discussions with the various parties involved.
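To make the misuse concern concrete, consider a hypothetical sketch: if the process for adding hashes to the blocklist is not audited, the filter itself becomes a tool for suppressing lawful content. Everything below – the function names, the submission mechanism, the example data – is invented for illustration and does not describe any real deployment:

```python
import hashlib

blocklist = set()

def add_to_blocklist(image: bytes) -> None:
    # Hypothetical submission path with no review or auditing:
    # anyone able to call this can poison the filter.
    blocklist.add(hashlib.sha256(image).hexdigest())

def is_blocked(upload: bytes) -> bool:
    return hashlib.sha256(upload).hexdigest() in blocklist

# An attacker submits the hash of a perfectly legal document...
add_to_blocklist(b"legal political pamphlet")

# ...and the filter now silently suppresses it on upload.
print(is_blocked(b"legal political pamphlet"))  # → True
```

How resistant the real systems are to this kind of poisoning, or to other forms of abuse, is exactly the question that, as far as we have been able to tell, nobody has tested.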

What about the re-purposing of the software for other uses? Suppose that the software works perfectly, that it cannot be misused to the detriment of the children it is supposed to protect, that it will not be yet another excuse to let technology deal with the symptoms of a crime rather than law enforcement deal with the real crime, and that it is therefore a valid, proportionate tool. What do the Commission and the United States think about “mission creep”? In other words, are they prepared to say that it would be entirely wrong to push upload filtering for less serious enforcement issues, such as copyright? We don’t know. On a related note, we also don’t know what the Commission or the United States think about Microsoft using filtering to implement its entirely incomprehensible terms of service, blocking or removing content that is entirely legal – as highlighted in our EDRigram of 4 July.

What detailed analysis of current trends and practices has led the European Commission to focus on this particular approach? According to responses provided to two parliamentary questions (7751/2011 and 8898/2012), despite the “Safer Internet Programme” being in operation for about ten years, the Commission has not gathered any statistics at all.

Still, we need not worry, because the European Commission has now outsourced much of the actual policy development to the recently formed “CEO Coalition”, where the working group on labelling of content is led by Silvio Berlusconi’s Mediaset, while the working group on “age-appropriate privacy settings” is chaired by… Facebook! Monday’s session on “messages from industry leadership” is therefore likely to be very informative.

That, in a nutshell, is the “best practice” that the EU and US will be discussing at the “summit”. And who could object to “best practice” in relation to child protection online?