Why is online child abuse so unimportant that, politically, it does not need laws? Why is online child abuse so unimportant that the policies that are proposed to address this problem are never subject to review to test their effectiveness? Why is online child abuse protection so unimportant that policies that are implemented are never subject to any review?
Sweden and the United Kingdom both introduced web blocking in the middle of the last decade. In both cases, this was the result of political and media pressure and, in neither case, on the basis of any particular evidence. Having been set up without any evidence of usefulness, the systems have never been subjected to any analysis to find out if they are useful or, worse still, whether they might actually be causing problems.
The issue of whether the blocking systems could be worse than useless is a serious one. Blocking lists have “leaked” into the public domain on more than one occasion. Only last week, a hacker was able to gain access to – and publish – the list of web pages blocked on an ad hoc basis in Germany. And, of course, if one well-intentioned hacker could do it and publish the fact that this had been done, it is entirely possible that one or ten or twenty ill-intentioned hackers have been doing this every day of every week since this blocking system was introduced.
The stubborn refusal of “child protection” authorities to submit any of their policies to any form of democratic control or any sort of assessment of usefulness is probably unique in the policy-making world. The very real risk of the list being hacked and becoming a tool for obtaining access to illegal material was obvious. So, what was the corresponding benefit? Nobody ever asks the question. If it is to prevent deliberate access, where is the evidence to suggest that feeble blocking systems achieve this goal? If it is to stop accidental access, where is the evidence to suggest that this happens in real life?
So, we come back again to the question. Why is child protection online so utterly unimportant that policy can be developed where the goals are unquantified and, frequently, unknown and where the risks are very real and verifiable?
Instead of real research, we get blatant nonsense. The UK’s blocking industry leader, the Internet Watch Foundation (IWF), with an income of nearly one and a half million pounds last year, is not shy about generating clever headlines. In March 2013, it published a press release saying that 1.5 million adults in the UK had “stumbled upon” child abuse material. This statistic is truly shocking.
Shocking… except… the figure was based on an opinion poll in which 4% of men and 2% of women THOUGHT they had possibly stumbled upon child pornography, and the IWF chose to ignore the fact that 75% of people who contact the IWF to report “illegal” content are, in fact, mistaken. The correct figures can therefore reasonably be assumed to be 1% of men and 0.5% of women. So we discover that, to an accuracy of plus or minus 3%, 1% of men and 0.5% of women accidentally accessed child pornography – which tells us precisely nothing. We should hope that the IWF honestly believed that its “statistics” were meaningful and not grossly manipulative and misleading.
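The adjustment described above is simple arithmetic: discount the poll figures by the IWF’s own 75% mistaken-report rate. A minimal sketch of that calculation (the percentages come from the article; nothing else is assumed):

```python
# Recompute the "adjusted" stumbled-upon figures from the article's numbers.
poll_men = 0.04       # 4% of men thought they had stumbled upon such material
poll_women = 0.02     # 2% of women thought so
mistaken_rate = 0.75  # 75% of reports to the IWF turn out to be mistaken

correct_rate = 1 - mistaken_rate          # only 25% of reports are accurate
adjusted_men = poll_men * correct_rate    # 0.01  -> 1% of men
adjusted_women = poll_women * correct_rate  # 0.005 -> 0.5% of women

print(f"Adjusted: {adjusted_men:.1%} of men, {adjusted_women:.1%} of women")
# -> Adjusted: 1.0% of men, 0.5% of women
```

Either way, the adjusted figures fall well inside a typical opinion-poll margin of error, which is the article’s point: the headline number is statistically meaningless.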
The blocking “voluntarily” introduced in the UK for dubious child protection reasons has now devolved into a blocking free-for-all in which everything from a conservative blog to a Porsche brokerage to a feminist blog has been blocked, while the Swedish private, for-profit company Netclean recently hit the jackpot with a 40 million Euro contract to provide blocking and filtering technology to Turkey, which has been repeatedly condemned before the European Court of Human Rights for illegal blocking. Netclean’s software comes “pre-configured” with the IWF blocking list but, usefully, can use “multiple lists”.
New study reveals child sexual abuse content as top online concern and potentially 1.5m adults have stumbled upon it (18.03.2013)
Internet Watch Foundation annual & charity report 2013
(Contribution by Joe McNamee, EDRi)