Blogs

CEO Coalition – the blind leading the bland

By EDRi · February 2, 2013

After a year of working group meetings, the “CEO Coalition to make the Internet a better place for kids” produces its final documents on 4 February. The outcome of the project is a set of voluntary guidelines divided into five broad headings, ranging from “reporting tools” to “notice and takedown”. It is intended that this will be followed up by a meeting, in about six months, between Commissioner Kroes and the CEOs of the companies responsible. The meeting is designed to put pressure on the CEOs to fully implement the “voluntary” measures.

Many of the “self-regulatory” measures (privacy, removal of content not judged illegal by a court, etc.) are obviously of concern with regard to fundamental rights. The fact that the Commission has a binding legal obligation under the 2003 Interinstitutional Agreement not to put fundamental rights at risk via “self-regulation” appears not to be a concern for the Commission.

The Commission also believes that the Charter of Fundamental Rights is not relevant because, even though it is putting pressure on the companies involved, the measures are “voluntary.” This questionable legal wordplay is a long way from the speech made by Commission President Barroso to mark the ceremony in 2009 where each Commissioner took a solemn oath to respect the Charter. On that occasion he “underlined that Commissioners today have made clear that they will uphold all the principles and values enshrined in the Treaties and the Charter of Fundamental Rights”.

Background

The starting position of the project was very unpromising. To begin with, no specific problems were identified. “Solutions” to the unknown problems were to be found by industry, in the blind faith that whatever industry suggested would at least partly be an answer to questions that had not been asked.

Overall project results

While there were isolated examples of useful developments, the overall outcome of the project is a set of documents that range from the irrelevant to the incoherent and even the dangerous.

The main problem was that, rather than learning from its mistakes, the Commission followed broadly the same approach when launching the “CEO Coalition” as it did with the failed “Safer Social Networking Guidelines” (pdf).

That earlier project was a similar initiative – and a public relations disaster for the Commission. It was launched in early 2009 and, warmly praised by the Commission, did little more than summarise existing legal obligations and practices and dress them up as a proactive “industry self-regulation” initiative. The Commission’s monitoring of industry compliance confirmed, unsurprisingly, that the guidelines were adequately implemented.

Subsequently, however, an investigation by a UK television station into one of the participating social networks (Habbo Hotel) came to the conclusion that the guidelines were ineffective. This led to Commission Vice-President Neelie Kroes being subjected to a very awkward interview on UK television. From one day to the next, the Commission went from praising the industry to threatening (using previously unheard-of powers) to shut down companies that did not… well… do good stuff to protect kids…

Not only did the CEO Coalition fail to learn from these mistakes, it was even worse on several levels. It allowed the leadership of two of the working groups to be taken by companies with obvious vested interests – Facebook putting itself in charge of the privacy working group and Microsoft (as a major commercial user of copyright takedown notices) putting itself in charge of “notice and takedown”. Microsoft tried energetically to use the forum as a means of rolling out its “PhotoDNA” upload filtering software. It did this despite being completely unable to produce any independent analysis of the software’s effectiveness or even evidence that it is not counterproductive.

Summary of working group results

Working Group on “notice and takedown”

The Working Group on “notice and takedown” was the most controversial. Microsoft tried and failed, despite energetic Commission support, to have upload filtering by default adopted as an industry standard in Europe. While it was being pushed (without any evidence that it would prove effective and proportionate for this task) as a measure that would only ever be used for child abuse material, the same strategy was simultaneously being pushed in the “Clean IT” project (in which Microsoft was not a participant) to filter uploads in search of ill-defined “terrorist” material.

Once the infrastructure is in place, as we have seen far too often already, it can easily be re-purposed for an ever-wider range of filtering options. It is simply reckless to roll out such technology without establishing its value for its intended purpose and without taking mitigating measures to prevent misuse and “mission creep”. The final report still retains echoes of that effort, such as in its undertaking to examine “whether more can be done with existing technology,” although overt mentions of upload filtering have been removed.

The final document is quite clear that the few reliable statistics that exist show that industry already removes content very quickly and that, where there are delays, these appear not to be due to problems on industry’s side. However, for the sake of being seen to be doing “something”, the group still produced recommendations for industry.

Even though the content in question involves images of serious crime which should always be investigated by law enforcement authorities, the final document suggests that there is a need (which it does not explain) to ban this criminal activity through Internet service provider (ISP) terms of service. Where ISPs have “actual knowledge” of illegal material, they are already under a legal obligation to remove it, so it is unclear what this added provision would bring.

The idea appears to be to give the ISP the option to take the place of law enforcement authorities (who can bring criminal charges), even though the ISP can only take superficial measures, such as removing content (which needs to be part of the solution, but not the whole solution). The suggestion is that the terms of service should give the ISP the right “to review and remove content at will and without notice, delete content and accounts, ban participants or terminate access to services in line with the laws applicable to the provider”. It goes on to say that ISPs need to explain to users, “in user-friendly language”, their rules regarding criminal child abuse. How many criminals need a “user-friendly” explanation that their crime is a crime? It is far from clear whether these measures would serve any useful purpose – or even that they would not be counterproductive. That, however, does not appear to be a concern.

Working group on “reporting tools”

The working group on “reporting tools” also failed to take account either of any problems that had been identified or, indeed, of experience with apps and browser buttons that have already been tried. As a result, many of its “new” proposals are not new, but simply re-packagings of existing ideas. The final text of this group recommends that:

*mobile operators will provide apps to allow reporting of content that “seems harmful” to children
*Industry will work with NGOs/hotlines to provide browser apps that allow users to seek help when they find harmful/inappropriate content or behaviour
*Industry will “work to improve” (building from an unspecified basis and solving unspecified problems) how it deals with complaints, “without hindering” law enforcement investigations – even though this refers to “harmful” or “abusive” but not (necessarily?) illegal content
*Industry will implement a “meaningful” set of categories for reporting tools on user-generated content services.

Despite repeated assertions during the entire process that content that was accused of being illegal would need to be treated differently in this context, this does not appear to be addressed in the final document.

There is also no reference to safeguards. The dangers of over-deletion of content on the basis of reports are well known. There are also dangers of children being bullied by having their content deleted as a result of other kids ganging up to send bogus “abuse” reports. But, in an evidence vacuum, nobody felt the need to address or even acknowledge these problems.

Working group on “Age appropriate privacy settings”

Even though the European Commission’s current strategy for all users is “privacy by default”, this Facebook-led working group moved away from this strategy, including for children. The working group saw its role as establishing “if” a single appropriate level of privacy settings was acceptable. Industry committed, rather blandly, to “make available age appropriate privacy settings” and, even more blandly, to offer “clear and understandable” privacy policies. If anything, this appears to be weaker than the position set out in the Safer Social Networking Guidelines adopted three years ago.

However, the group did produce one of the very few useful outputs from the CEO Coalition – a database of current practice, from which an evidence-based approach to industry experience and practice could be built.

Working group on “Content classification”

The aim of this group was to create content classification systems wherever “needed” by children or parents. The recommendation on apps is that:

*they will be self-certified, but this process may be “aided” by platforms or “ratings providers”
*consumers should be “easily” able to find out what content is included/excluded by any rating system
*feedback/complaints from consumers should be possible (but there is no recommendation on what to do with such feedback/complaints)

The working group could not come up with any recommendations that fell within their own remit regarding user-generated content. To fill the void, they explained that providers could ban content that is already banned by the law and provide reporting mechanisms for content that may be illegal or may not be illegal but may be harmful. Industry may then age-restrict that content.

On professional content, Coalition members encourage themselves to provide ratings on the basis of existing ratings systems.

A technical taskforce was set up in order to encourage interoperability of ratings models.

Working group on parental controls

The group’s final document notes that numerous tools have appeared on the market since the start of the project and gives the group credit for this development. The group, expertly led by Nokia for most of its work, stressed the limitations of technical tools and said that the solution lies in a balance between tools and the education of parents and children. Nokia’s insistence on permanent review of experience and evidence was a very welcome exception to the approach followed by the CEO Coalition more generally.

The recommendations of the group are to provide parental control tools to users/customers by the end of this year, to give clear information to consumers about these tools, to “consider” best practice (thereby avoiding the “on” by default or “off” by default question) and to work with “wider stakeholders” to raise awareness of the controls.

Conclusion

The CEO Coalition did produce some positive outcomes. The networking effect of bringing relevant experts and industry representatives together is an achievement in itself. However, most of the positive outcomes appear to be instances of good fortune happening on the fringes.

The project is a real lost opportunity. If the same human and financial resources had been allocated to addressing clearly identified problems, the outcomes would have been far better. The European Commission owes it to children, to parents, to all citizens and to itself to move away from this public-relations-driven, evidence-free approach.