By Guest author

The new danger is no longer yellow, but red once more: fake news. It helped get Trump elected. It paved the highway to Brexit. Even local European elections are not safe. The greatest danger to our democracy in modern times must be fought by all possible means.


Fortunately, we have the European Commission. Laws are gradually replaced by terms of use, and courts of law by advertising agencies. The latter are tipped off by Europol and media companies when users break their rules. Trials are no longer necessary. Progress is only measured by how much and how quickly online content gets deleted. The Commission keeps up the pace with a rapid-fire of communications, covenants, press releases and directive proposals, and sees that all is well.
Unfortunately, the previous paragraph is not an attempt at satire. The only incorrect part is that the fake news hype is not the cause of this evolution. It does, however, fit in seamlessly.

Fake news avant la lettre

Fake news is old news. In his book “The Attention Merchants”, Tim Wu links its emergence to the rise of the tabloids in the 1830s. While most newspapers cost six cents, the New York Sun cost only one cent. It quickly grew in popularity by publishing an abundance of gruesome details on court cases, and was mainly financed by ads for “patent medicine” – commercial medicine based on quackery.

The rise of radio tells a similar tale. RCA, the maker of the first radios, launched NBC so its customers could listen to something on their new device. CBS, which started broadcasting later, nevertheless quickly grew much bigger thanks to easy-listening programming coupled with an expansive franchise model that let local stations share in the ad revenue. Television reran the same story, with Fox News managing to reach a broad audience with little previous exposure to TV ads.

Tall stories, half truths, and sensational headlines are tried and tested methods used by media companies to sell more ads. On the internet, every click on “Five Tips You Won’t Believe” banners also earns money for the ad agencies. However, so do visits to “Hillary’s Secret Concentration Camps”. In this sense, the distribution of fake news through Facebook and Google has always been a natural part of their business model.

The doctor is expensive

Disinformation about a person is handled by defamation law. For specific historical events, like the Holocaust, most European countries have laws that make denial a criminal offence. Spreading certain other kinds of false information, however, such as claiming that the Earth is flat, is not illegal.
Laws in this field are always contentious, given the tension with the right to free speech and the freedom of the press. Deciding what takes precedence is rarely obvious, and so normally a judge has the final word as to whether censorship is appropriate.

However, the courts are overloaded, money to expand them is lacking, and the amount of rubbish on the internet is gargantuan. Therefore legislators are eagerly looking at alternatives.

Let’s try self-medication

In recent years, the approach at the European level to relieve the courts has been one of administrative measures and self-regulation.

In 2011, article 25 of the Directive on Combating Sexual Exploitation of Children introduced voluntary website blocking lists at the European level. The goal was to make websites related to child sexual abuse unreachable in cases where closing them down or arresting the criminals behind them proved unfeasible.

The 2010 revision of the Directive on Audiovisual Media Services (AVMS), originally intended for TV broadcasters and media companies, was broadened to also partially cover services like YouTube. It requires sites that enable video sharing, and only those sites, to take measures against, among other things, hate speech. A procedure to broaden this mandatory policing is ongoing.

This fight was intensified by means of a Code of Conduct on Online Hate Speech, which the European Commission agreed on in 2016 with Facebook, Microsoft, Twitter and YouTube. These companies agreed to take the lead in combating this kind of unwanted behaviour.

The Europol regulation, also from May 2016, complements this code of conduct. It formalised Europol’s “Internet Referral Unit” (IRU) in article 4(m). Europol itself cannot take enforcement actions. As such, the IRU is limited to reporting unwanted content to the online platforms themselves “for their voluntary consideration of the compatibility of the referred internet content with their own terms and conditions.” The reported content need not be illegal.

The European Commission’s communication on Tackling Illegal Online Content from 2017 subsequently focussed on how online platforms can remove reported content as quickly as possible. Its core proposal consists of creating lists of “trusted flaggers”. Their reports on certain topics should be assumed valid, and hence platforms can check them less thoroughly before removing flagged content.

The new Copyright Directive, for now voted down, would make video sharing sites themselves liable for copyright infringements by their users, both directly and indirectly. This would force them to install upload filters. Negotiations between the institutions on this topic will resume in September.

Concerning fake news, the European Commission’s working document Tackling Online Disinformation: a European Approach from 2017 contains an extensive section on self-regulation. In January 2018, the Commission created a “High Level Working Group on Fake News and Online Disinformation”, composed of various external parties. Their final report proposes that a coalition of journalists, ad agencies, fact checkers, and so on, be formed to write a voluntary code of conduct. Finally, the Report on the Public Consultation from April 2018 also mentions a clear preference by the respondents for self-regulation.

Follow-up of the symptoms

At the end of 2016 (a year late), the European Commission published its first evaluation of the directive against the sexual exploitation of children. It includes a separate report on the blocking lists, but it does not contain any data on their effectiveness nor side effects. This prompted a damning resolution by the European Parliament in which it “deplores” the complete lack of statistical information regarding blocking, removing websites, or problems experienced by law enforcement due to erased or removed information. It asked the European Commission to do its homework better in the future.

However, for the Commission this appears to be business as usual. In January 2018, four months after its Communication on Tackling Illegal Online Content, it sent out a press release that called for “more efforts and quicker progress” without any evaluation of what had been done already. The original document moreover contained no concrete goals or evaluation metrics, which raises the question: more and quicker than what, exactly? This was followed in March 2018 by a recommendation from that same European Commission in which everyone, except the Commission and the Member States themselves, was called upon to further increase their efforts. The Commission now wants to launch a Directive on this topic in September 2018, undoubtedly with requirements for everyone but themselves to do even more, even more quickly.

Referral to the pathogen

Online platforms have the right, within the boundaries of the law, to implement and enforce terms of use. What is happening now, however, goes quite a bit further.

More and more decisions on what is illegal are systematically outsourced to online platforms. Next, covenants between government bodies and these platforms include the removal of non-illegal content. The public resources and the authority of Europol are used to detect such content and report it. Finally, the platforms are encouraged to perform fewer fact checks on reports from certain groups, and there are attempts to make the platforms themselves liable for their users’ behaviour. This would only make them more inclined to pre-emptively erase controversial content.

When a governmental body institutes measures, these are always tested against the European Charter. Their proportionality, effectiveness and subsidiarity need to be respected in light of fundamental rights such as the right to free speech, the prohibition of arbitrary application of the law, and the right to a fair trial. Not prosecuting certain categories of unwanted behaviour, or not even making them illegal, and instead “recommending” that online platforms take action against them, undercuts these foundations of our rule of law in a rather blunt way.

Moreover, these online platforms are not just random companies. As the founders of Google wrote in their original academic article on the search engine:

For example, in our prototype search engine one of the top results for cellular phone is ‘The Effect of Cellular Phone Use Upon Driver Attention’… It is clear that a search engine which was taking money for showing cellular phone ads would have difficulty justifying the page that our system returned to its paying advertisers. For this type of reason and historical experience with other media, we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.

Meanwhile, Google has become one of the largest advertising agencies in the world. Facebook and Twitter also obtain a majority of their revenue from selling ads. If the Commission were to ask whether we should outsource the enforcement of our fundamental rights to ad agencies, the response would presumably be quite different from the response to the umpteenth hollow announcement about how internet platforms should address illegal online content more quickly and more thoroughly.

A difficult diagnosis

If we look specifically at fake news, there are two additional problems: there is no definition, and there are few studies about it in its current form. Even the public consultation on fake news by the European Commission caused confusion by giving hate speech, terrorism and child abuse as examples of “already illegal fake news”. However, none of these terms refer to concepts that are necessarily fake or news. This makes it hard to draw conclusions from the answers, because it is unknown what the respondents understood by the term. The Eurobarometer survey on this topic has similar problems.

This does not mean that no information exists. The German NGO Stiftung Neue Verantwortung performed an experiment by spreading fake news through bought Twitter followers. They drew some interesting conclusions:

Fake news is like a meme: its essence is not its existence, but how well it is picked up. This means that blocking the source of popular fake news will not stop it from spreading;
Fake news is spread by strongly linked users, while debunking is done by a much more varied group of people. Hence, people with similar opinions seem to share the same fake news, but it seems to influence the general public less strongly.

An analysis by EDRi member Panoptykon on artificially increasing the popularity of Twitter messages on Polish politics led to compatible conclusions. There are bubbles of people that interact very little with each other. Each bubble contains influencers that talk about the same topics, but they seldom talk directly to each other. Prominent figures and established organisations, rather than robots (fake accounts), steer the discussions. Robots can be used to amplify a message, but by themselves do not create or change trends.

It is virtually impossible to distinguish robots from professional accounts by only looking at their network and activity. Therefore it is very hard to automatically identify such accounts in order to study or block them. These are only small-scale studies, and one has to be careful about drawing general conclusions from them. They certainly do not claim that fake news has no influence, or that we should just ignore it. That said, they do contain more concrete information than all the pages on this topic published to date by the European Commission.

So what should we do? Miracle cures are in short supply, but there are a few interesting examples from the past. The year 1905 saw a revolt against patent medicine after investigative journalists exposed its dangers. Later, in the 1950s, TV quizzes were found to favour telegenic candidates because of their beneficial effect on ratings. Revenue has steered content since forever. Independent media should therefore be an important part of the solution.

The cure and the disease

The spectacular failure of the political establishment in both the US and the UK could not possibly have been of its own making, so a different explanation was called for. Forget about the ruckus back in the day about Obama’s birth certificate or his alleged secret Muslim agenda, David Cameron’s desperate ploy to cling to power, and the tradition of tabloids and their made-up stories in the UK. This is something completely different. Flavour everything with the dangers of online content and present yourself as the digital knight on the white horse who will set things straight. Or rather, who orders the ones making money from sensation and clickbait (such as fake news) to set things straight as they see fit.

The above is oversimplified, but it is incredible how this European Commission is casually promoting the Facebooks and Googles of this world to become the keepers of European fundamental rights. Protecting democracy and the rule of law is not a business model. It is a calling. One that few will attribute to Mark Zuckerberg.

This article originally appeared in Dutch on Apache.be.
Original Article: De nepstrijd tegen het nepnieuws


Read more:

Press Release: “Fake news” strategy needs to be based on real evidence, not assumption (26.04.2018)
https://edri.org/press-release-fake-news-strategy-needs-based-real-evidence-not-assumption/

ENDitorial: Fake news about fake news being news (08.02.2017)
https://edri.org/enditorial-fake-news-about-fake-news-being-news/

(Contribution by Jonas Maebe, EDRi observer)
