ENDitorial: Online child protection should not come hand-in-hand with censorship

By EDRi · May 23, 2012

This article is also available in:
Deutsch: [ENDitorial: Schutz von Kindern im Internet sollte nicht mit Zensur einhergehen | https://www.unwatched.org/EDRigram_10.10_ENDitorial_Schutz_von_Kindern_im_Internet_sollte_nicht_mit_Zensur_einhergehen?pk_campaign=edri&pk_kwd=20120523]

Earlier this month, the UK Prime Minister David Cameron announced that
his government would be considering default filtering of ‘adult content’
on the internet. The announcement came shortly after the release of a
report by a cross-party parliamentary inquiry into online child
protection, chaired by Claire Perry. The report’s chief recommendation is the
adoption of an opt-in filter for adult material on the internet as the
best way to protect children online. Regulation of online content is
also suggested as a way forward.

Let us be clear. There is no doubt that the protection of children from
‘harmful’ content is a perfectly legitimate goal. In fact, the European
Court of Human Rights has said that states have an obligation to protect
children online (K.U. v Finland, 2 December 2008). The problem with the
inquiry’s proposals, however, is that they are hopelessly vague and
would severely damage freedom of expression online.

To begin with, freedom of expression is as much about ideas and
information we agree with or care about as it is about speech we
consider offensive or are not interested in. Like it or not, pornography
is protected speech both under Article 19 of the International Covenant
on Civil and Political Rights (ICCPR) and Article 10 of the European
Convention on Human Rights (ECHR). The same is true of the vast majority
of ‘adult content’ online or offline, with the exception of child
pornography, which is one of the very limited types of expression that
can be prohibited under international law. For this reason, any limit on
freedom of expression can only be justified if it is narrowly defined
and proportionate to its purpose.

A major issue with the current proposals is that it is unclear what
would be filtered or blocked. In the absence of an agreed definition of
pornography, works of art like Nabokov’s Lolita would not necessarily
pass muster. It gets worse with ‘adult material’, which potentially
encompasses a much wider range of content. The point is that these terms
typically entail subjective value-judgments, which makes it almost
impossible to anticipate what would or should be filtered. Of course,
there are already various kinds of regulation of adult material, e.g.
broadcasting standards that restrict the screening of adult material on
terrestrial television. But the internet is not like television: models
of content regulation that are appropriate for broadcasting have been
emphatically rejected as inappropriate for the internet, which is closer
in nature to print media.

Another concern is that filters would not only target ‘pornographic’ and
‘adult’ material but also any content deemed ‘harmful’, ‘inappropriate’
or ‘unacceptable’, terms which are even less amenable to clear and
objective definition. For example, would a Lady Gaga music video on
YouTube be filtered? Under current proposals, this may well be the case.
In Indonesia, the singer was recently denied entry into the country to
perform a concert on the ground that her outfits and dance moves would
‘corrupt’ the country’s youth.

Filtering pornography or adult content by default at network level would
also have a disproportionate impact on freedom of expression. First of
all, it would entrench a system whereby, by default, the vast majority
of the population would only be given access to what is deemed fit for
children – and this in the absence of conclusive evidence that access to
pornography and other ‘inappropriate’ material is even harmful to
children. It might be worth remembering at this point that teenagers
have always found ways to access pornography in the offline world, even
before the internet.

Secondly, as ARTICLE 19 has already pointed out several times, the
decision as to what is acceptable or not – and therefore what may be
filtered or not – should not be left to ISPs, who are ill-suited to make
such judgements. In our view, content restrictions on the internet are
plainly unnecessary. If, however, they were to be adopted, they should
at the very least have a clear legal basis and the question whether or
not they are justified should be decided by the courts on a case-by-case
basis.

Thirdly, as is well known, web filtering and blocking are far from
perfect, with obvious risks of overblocking, i.e. blocking perfectly
legitimate content. The Open Rights Group has documented how this
already happens with mobile broadband filtering.

Fourthly and equally well-known is the fact that blocking and filtering
are largely ineffective because they can be easily circumvented using
proxies and other techniques. Indeed, numerous websites offer
circumvention technologies. A recent example is the blocking of the
Pirate Bay, a file-sharing website: after UK and Dutch courts ordered
ISPs to block it, a flurry of new sites sprang up offering access to it.

Finally, if default blocking becomes acceptable for the protection of
children online, there is a real danger that it would be extended to
other types of content, setting us on a slippery slope towards mass
online censorship.

Opt-in filtering systems imposed by government or commercial service
providers are clearly a disproportionate restriction on freedom of
expression. Primary responsibility for protecting children online lies
with their parents. If parents want to control what their children do
online, they should install the filtering software of their choice and
control its settings, rather than impose a Victorian view of what is
appropriate content on everyone else.

Join the debate – Article 19 (17.05.2012)
http://www.article19.org/join-the-debate.php/36/view/

(Contribution by Gabrielle Guillemin – EDRi-member Article 19 – UK)