In the digital era, copyright should be implemented in a way that benefits creators and society. It should support cultural work and facilitate access to knowledge. Copyright should not be used to lock away cultural goods, damaging rather than benefitting access to our cultural heritage. Copyright should be a catalyst of creation and innovation. In the digital environment, citizens face disproportionate enforcement measures from states, arbitrary privatised enforcement measures from companies, and a lack of innovative offers, all of which reinforces the impression of a failed and illegitimate legal framework that undermines the relationship between creators and the society they live in. Copyright needs to be fundamentally reformed to be fit for purpose, predictable for creators, flexible and credible.

29 Aug 2018

US companies to implement better privacy for website browsing

By Article 19

Important changes are underway for web users, as browser manufacturers are set to put domain name system (DNS) look-ups in the hands of more predictable, trusted and transparent sources.

----------------------------------------------------------------- Support our work with a one-off donation! -----------------------------------------------------------------

DNS-over-HTTPS (DoH) will introduce much-needed security and privacy features to web browsing by resolving the DNS requests a browser makes through a trusted DNS provider chosen by that browser. The DNS is the part of the internet architecture that ties a website address to the server where the content of that website is stored. This new feature will make it transparent which DNS look-up service is being used.
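As an illustration of what DoH changes on the wire, the sketch below builds the kind of HTTPS GET request a DoH client sends under RFC 8484: the classic binary DNS query is base64url-encoded into a URL for the resolver's HTTPS endpoint. Cloudflare's public endpoint appears here purely as an example, and the helper name is ours, not part of any browser API:

```python
import base64
import struct

def build_doh_get_url(resolver: str, qname: str) -> str:
    """Build an RFC 8484-style DNS-over-HTTPS GET URL for an A-record query.

    Browsers do this internally; this illustrative helper only shows the
    shape of the request, it does not send it.
    """
    # DNS header: ID 0 (RFC 8484 recommends this for HTTP cacheability),
    # flags with only RD (recursion desired) set, one question record.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question section: QNAME as length-prefixed labels, terminated by a
    # zero byte, followed by QTYPE=A (1) and QCLASS=IN (1).
    labels = b"".join(
        bytes([len(part)]) + part.encode("ascii")
        for part in qname.rstrip(".").split(".")
    ) + b"\x00"
    question = labels + struct.pack(">HH", 1, 1)
    # The wire-format message is base64url-encoded with '=' padding stripped.
    dns_param = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()
    return f"{resolver}?dns={dns_param}"

url = build_doh_get_url("https://cloudflare-dns.com/dns-query", "example.com")
print(url)
```

Because the whole exchange travels inside ordinary HTTPS, an on-path observer (such as an ISP) sees only encrypted traffic to the resolver, which is the privacy gain the article describes.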

By default, a DNS look-up is handled by a chain of requests until an IP address for the website is found: first to one’s internet service provider (ISP), next to the closest DNS root server, then potentially to a cloud hosting provider and other intermediary servers until the address is located and sent back to the user. The DNS information is then cached back along the chain, from the ISP to one’s home router and finally in the web browser itself.
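This chain is invisible to applications: a single stub-resolver call hands the query to whatever DNS service the system has been configured with, typically the ISP's. A minimal sketch using the operating system's resolver:

```python
import socket

def resolve_ipv4(hostname: str) -> list[str]:
    """Ask the system's configured DNS resolver (usually the ISP's or the
    home router's) for IPv4 addresses. The recursive chain of root, TLD
    and authoritative servers is walked on our behalf, out of sight."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # the IP address is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

print(resolve_ipv4("localhost"))  # typically ['127.0.0.1']
```

Nothing in this call reveals which resolver answered or under what privacy terms, which is precisely the opacity the article goes on to describe.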

Private persons and consumers need a high level of technical skill even to find out who is providing their DNS, and it gets more esoteric still if they want to know the provider's privacy terms. Having a trusted DNS provider and knowing how to reconfigure the default DNS settings is even further beyond the reach of most users.

Every DNS request sends data about the website a user is visiting, and, individually or in aggregate, this data can be used to infer the behaviour of individual users or groups of users. Internet usage statistics are often compiled from DNS request data. It is a small and technically specific form of personal data collection that has received little attention from EU regulatory authorities to date.

DoH will solve some of these issues, but at a considerable price that must be recognised. In practice, these privacy-enhancing changes will reduce the number of DNS look-up services that users are in contact with; yet, since the trusted DNS services will be chosen by the browser, the remaining few look-up services will be predominantly US-based.

As with so many internet issues, this is a trade-off over which parties retain commercial and technical control over individual persons and consumers. In the case of Mozilla’s Firefox, the trusted DNS provider will be Cloudflare. If Chrome adopts DoH, the DNS provider is likely to be Google itself.

If EU providers were to make a better privacy-by-design effort than Cloudflare and Google have already done, then, according to Article 25 of the GDPR, they would have to be the preferred choice for browsers in the EU. Data Protection Agencies would have to assess whether the browser makers have really opted for the most privacy-enhancing DNS providers. However, as of today, there is nothing to suggest any EU DNS company could credibly claim to top Cloudflare on DNS privacy. Like it or not, the current plethora of DNS providers is not conducive to data privacy at all.

DNS discussions are currently ongoing at the Internet Engineering Task Force (IETF), the global standardisation community for low-layer internet protocols. EDRi member ARTICLE 19 is following the discussions on best practices for state-of-the-art privacy-by-design and data management.

DoH is, at least partially, a concrete and positive effect of EU leadership on data protection issues. Hopefully, it will serve to enhance the protection of personal privacy while making internet back-end services less obscure. IETF standard setting will provide a benchmark for robust privacy protections in DNS. But these developments are also an example of how EU internet infrastructure organisations and their governors have some way to go before they can be at the top of the privacy game. The success of European global privacy leadership will be measurable by how Europe reacts to these necessary privacy enhancements.

Read more:

Improving DNS Privacy in Firefox

“Avskrivningsbeslut Säkerhetsbrister i kundplacerad utrustning” (“Decision to close the case on security flaws in customer-premises equipment”, only in Swedish)

IETF DNS PRIVate Exchange (dprive) Working Group

(Contribution by Amelia Andersdotter and Mallory Knodel, EDRi member Article 19, United Kingdom)



29 Aug 2018

Can you do independent research without being independent?

By Bits of Freedom

Can you do independent research without being independent? The European Commission is evaluating how the rules on net neutrality have been implemented across Europe. These rules are designed to protect the rights of internet users. To our surprise, the evaluation is carried out by a law firm that frequently represents the big telecom providers that oppose net neutrality. Does that make sense?

The net neutrality rules ensure that users, and not the provider, are free to decide which services to use online. These are the rules designed to make sure that “access to the internet” continues to mean “access to the entire internet”. The evaluation is vitally important, because many providers throughout Europe are starting to offer subscriptions in which the traffic of certain services receives preferential treatment.

The study on which the evaluation will rely has been awarded to the law firm Bird & Bird, in consortium with the research and consultancy company Ecorys. In EU Member States like the Netherlands, Bird & Bird represents most major telecom operators on matters related to the telecommunications regulatory framework, including net neutrality. For example, Bird & Bird represents T-Mobile in the pending court case EDRi member Bits of Freedom has initiated against the decision of the Dutch Regulatory Authority ACM not to take action against T-Mobile’s zero-rating offer. This court case revolves around the practice of zero-rating and the interpretation of the net neutrality rules that are also the subject of the study.

Although there is no reason to doubt the legal expertise and experience of Bird & Bird, one could and should have concerns about awarding this particular study to this law firm. Given that this firm represents telecom operators in conflicts surrounding this very legislation, there are reasonable questions to be raised about its independence and impartiality in conducting the study. This casts doubt on the validity of the results, and could damage the credibility of, and confidence in, the European Commission's evaluation of the net neutrality provisions and the measures it takes as a result.

A number of European organisations defending users and consumers have asked the European Commission to provide a written confirmation of the impartiality of this study. The confirmation should include a list of all measures taken by the European Commission and/or Bird & Bird to ensure the independence and impartiality of the evaluators conducting the study and the quality of the report. Particularly in light of these problems, it is vital that the Commission presents a balanced report based on the findings of Bird & Bird.

This article was originally published by EDRi member Bits of Freedom. It is available here. A version in Dutch is available here.

Read more:

Net Neutrality

Bits of Freedom’s court case about zero rating (06.08.2018)

15 organisations ask the European Parliament not to weaken net neutrality enforcement (27.04.2018)

Dutch ban on zero-rating struck down – major blow to net neutrality (17.05.2017)

(Contribution by Rejo Zenger, EDRi member Bits of Freedom, the Netherlands)

29 Aug 2018

Women on Waves: how internet companies police our speech

By Bits of Freedom

Increasingly, internet companies decide which content we’re allowed to publish and receive. Users have become passive participants in a Russian Roulette-like game of content moderation.


Three suspensions, three apologies

In January 2018, pro-choice organisation Women on Waves receives a message stating it has violated YouTube’s “community guidelines” and therefore its account has been taken down. The account of Women on Web, Women on Waves’ sister organisation, is suspended too. No specifics are offered, but they are no longer able to access their account or the content on it. They appeal through YouTube’s appeal mechanism, but nothing happens. They subsequently issue a press release which proves more effective: their accounts are reinstated.

Fast forward to April. Since the reinstatement of its account in January, Women on Waves hasn’t uploaded any new material. Yet their account is suspended again and for the same reason. Just like that, Women on Waves’ videos, available in a dozen different languages, are no longer accessible to people searching for reliable medical information on safe abortion. In Europe this couldn’t have come at a more inconvenient time, namely thirty days before Ireland’s abortion referendum.

Women on Waves’ suspension doesn’t go unnoticed. Where in January it was a press release that led to the reinstatement of their accounts, in April it seems to be a number of tweets directed at YouTube and YouTube CEO Susan Wojcicki that cause YouTube to act. The result is the same: YouTube re-reviews the account and concludes Women on Waves isn’t in violation of its community guidelines. The account is put back online and along with it Women on Web’s account.

Sadly, a month later the same thing happens again: on June 15, one of Women on Web’s videos is taken down and soon the entire account follows. On June 16, Women on Web appeals the take-down – this is denied on June 18. Two days later, after we reach out to YouTube Netherlands, Women on Web receives a message informing them their account is being reinstated after all.


All sounds a bit tedious? We agree. Together with Women on Waves and Women on Web, we got in touch with YouTube Netherlands last May. We asked if they would share why and how the accounts were flagged, and how the decision was made to put them back online. We were told this is internal information that can’t be made public. Further probing was met with more deflection: YouTube takes down so much content each day, mistakes are bound to be made. So be it.

Users can’t rely on YouTube

Of course, YouTube isn’t the open internet. As a company, it decides what content is allowed and what isn’t. We’ve seen numerous examples of this, from its efforts to push certain types of content or accounts, to the “concealing” of LGBTQ+-related videos; and from the forced monetisation of some accounts, to the demonetising of other, “unfavourable”, ones. But as we’ve seen, the company doesn’t even always know itself what it finds unfavourable, so how are users supposed to anticipate its rulings? YouTube’s mission is “to give everyone a voice”; it doesn’t hesitate to rob you of it, either.

This whimsical decision-making might be considered cute, if YouTube didn’t hold so much power. Because as it stands, YouTube is the one website people visit to search for video content. This position of power is reinforced by the fact that in many countries YouTube and telecom providers have struck deals to offer traffic to YouTube free of charge. And don’t forget: in many countries visiting YouTube is a lot safer than visiting an abortion organisation's own website. In other words, if you want your video content to be accessible, you need YouTube.

YouTube turns users into passive participants

Women on Waves’ accounts have been suspended and subsequently reinstated three times this year. Not because of YouTube’s complaint procedure, but because Women on Waves has a network of journalists and high profile followers that could draw attention to the ban and force YouTube to act. This might have worked now, and it might have worked for Women on Waves, but it can’t, and shouldn’t, be relied on to work in every case and for everyone. As long as YouTube doesn’t give more meaningful insight into their content moderation process, their users will remain on the sidelines.

If you don’t already have friends in high places (or at newspapers), start making them now – you’ll be needing them.

(Contribution by Evelyn Austin, EDRi member Bits of Freedom, the Netherlands)

This article was originally published on Bits of Freedom website. You can find the original article here.



29 Aug 2018

ENDitorial: The European Commission is talking “tough on terror”. Again.

By Joe McNamee

The European Commission plans to issue a Regulation on 12 September 2018 that will get tough on internet companies in the fight against terrorism. After all, somebody should do something, right? At the time of writing the title is “Regulation on preventing the dissemination of terrorist content online”.


The launch date is significant:

– it is 96 hours after the European Commission’s most recent terrorism Directive is due to come into force (including measures on addressing terrorist content online).

– it is not quite a full year since the European Commission issued a press release on its Communication on Tackling Illegal Content Online, about getting internet companies to “tackle” illegal content.

– it is slightly over six months since the European Commission issued a press release about its Recommendation on “effectively” tackling illegal content online.

– it is somewhat more than two years since the Commission issued its press release on tackling (“illegal”) “hate speech”, an initiative that has in the meantime produced no statistics about how much of the content being deleted is actually illegal, nor about the actual impact of the measures.

Issues which will not be included in the European Commission’s upcoming press release on its new “getting tough on internet companies” Regulation include:

– the fact that it has no idea whether, or in how many cases, instances of serious crime and terrorism reported by Europol to internet companies ever get investigated or prosecuted (Europol also does not know whether the reported content is actually illegal)

– the fact that neither the European Commission nor Europol knows how much potential evidence is deleted by internet companies as a result of the reports Europol issues to them

– the fact that the European Commission has failed miserably to collect meaningful data on the availability and removal of illegal child abuse material. This was severely criticised by the European Parliament, in a Resolution passed by 597 votes in favour and six against.

The Regulation comes three months after the French (Collomb) and German (Seehofer) interior ministers sent a letter to the European Commission demanding that internet companies be made liable (as they already can be under the 2000 E-Commerce Directive) for failing to act quickly when they are made aware of illegal activity. In a separate development a few days after sending that letter, Interior Minister Collomb failed to take action upon receiving a video containing apparent evidence of a serious assault. He reportedly told a parliamentary committee of enquiry that it wasn’t his job as Interior Minister to do this. It is the job of internet companies to fight illegal activity, not the job of a national minister with responsibility for security, it appears.

It is, of course, a complete coincidence that the European Commission did exactly what the French and German ministers told them to do. Anything else would be a blatant breach of the oath of office of the College of Commissioners. Each Commissioner has solemnly undertaken “neither to seek nor to take instructions from any Government or from any other institution, body, office or entity”.

Read more:

Guide to the Code of Conduct on Hate Speech (03.06.2016)

The time has come to complain about the Terrorism Directive (15.02.2017)

Europol: Delete criminals’ data, but keep watch on the innocent (27.03.2018)



25 Jul 2018

Member in the spotlight: Hermes Center

By Hermes Center

In this edition of the “Member in the spotlight” series, EDRi is proud to present new member Hermes Center.

Who are you and what is your organisation’s goal and mission?

Hermes Center for Transparency and Digital Human Rights is an Italian civil rights organisation that promotes and develops awareness of, and attention to, transparency, accountability and freedom of speech online and, more generally, the protection of rights and personal freedom in a connected world.

How did it all begin, and how did your organisation develop its work?

The origins of Hermes can be traced back to the start of the GlobaLeaks project and its fundamental need for an organisation that advocates for privacy and digital human rights in Italy in an organised manner.

The Center was formally registered in 2012, bringing together people who had for years been among the most active members of Italian privacy communities, all part of the “Project Winston Smith” (PWS), the former “Anonymous Organisation”.

From its creation, Hermes brought together a unique mix of activists, lawyers and hackers operating in national and international contexts, with core activities ranging from the software development of whistleblowing and anonymity tools to digital rights advocacy work such as supporting journalists, organising awareness-raising events, drafting op-eds on these issues for national media, and contributing to policy and advocacy work.

Its main goals have been fully supported over the years by volunteers and by private donations for its work (conference presentations, free workshops, and the organisation of conferences and panels).
Funding has mostly been provided by the Open Technology Fund in Washington and the Hivos foundation in The Hague, in the form of research grants for its software development projects, GlobaLeaks and Tor2web.

The biggest opportunity created by advancements in information and communication technology is…

… the free flow of information, the possibility to connect instantaneously with people from all over the world to discuss and organise on common topics. Furthermore, these technologies can foster more transparency and accountability of the government and help citizens to defend their rights.

The biggest threat created by advancements in information and communication technology is…

… the pervasive state of surveillance that presents two different and somehow overlapping faces: surveillance capitalism by companies and police surveillance.

Which are the biggest victories/successes/achievements of your organisation?

We successfully advocated for the adoption by the Italian Anti-Corruption Authority (ANAC) of an online whistleblowing platform using onion services, giving whistleblowers a secure way to report illegal activity while protecting their identities.

With the same goals and similar activities, through a network of EU partners and mostly in collaboration with Xnet, the Center supported Barcelona City Hall, the Anti-Fraud Authority of Catalonia, the Anti-Fraud Authority of Valencia and Madrid City Hall in adopting free and open source technologies – first among them the aforementioned GlobaLeaks whistleblowing framework, created by the Center and now used worldwide by more than 300 organisations.

Additionally, together with other Italian organisations, we managed to push the Italian Ministry of Economic Development (MISE) to revoke the export licence of Area SpA, a well-known surveillance company.

If your organisation could now change one thing in your country, what would that be?

We would like to change the approach by politicians to the discussion around digital rights, and achieve greater civil society involvement.

What is the biggest challenge your organisation is currently facing in your country?

Last year, the Italian government introduced a new data retention law, extending the collection and retention of phone and telecommunications metadata to a period of 72 months. This measure clearly goes against CJEU case law on data retention. At the same time, we are concerned about government surveillance, which lacks a clear legal framework, and we are firmly opposing the adoption of the electronic voting systems being discussed by the government.

How can one get in touch with you if they want to help as a volunteer, or donate to support your work?

You can reach us on Twitter, on Facebook, or by visiting our website, where you can find our official contacts and those of our members. There is also the possibility of making a donation or volunteering.

Read more:

EDRi “Member in the spotlight” series



25 Jul 2018

ENDitorial: The fake fight against fake news

By Guest author

The new danger is no longer yellow, but red once more: fake news. It helped get Trump elected. It paved the highway to Brexit. Even local European elections are not safe. The greatest danger to our democracy in modern times must be fought by all possible means.


Fortunately, we have the European Commission. Laws are gradually replaced by terms of use, and courts of law by advertising agencies. The latter are tipped off by Europol and media companies when users break their rules. Trials are no longer necessary. Progress is only measured by how much and how quickly online content gets deleted. The Commission keeps up the pace with a rapid-fire of communications, covenants, press releases and directive proposals, and sees that all is well.

Unfortunately, the previous paragraph is not an attempt at satire. The only incorrect part is that the fake news hype is not the cause of this evolution. It does, however, fit in seamlessly.

Fake news avant la lettre

Fake news is old news. In his book “The Attention Merchants”, Tim Wu links its emergence to the rise of the tabloids in the 1830s. While most newspapers cost six cents, the New York Sun cost only one cent. It grew quickly in popularity thanks to publishing an abundance of gruesome details about court cases, and was mainly financed by ads for “patent medicine” – commercial medicine based on quackery.

The rise of radio tells a similar tale. RCA, the maker of the first radios, launched NBC so its customers could listen to something on their new device. CBS, which started broadcasting at a later date, nevertheless quickly grew much bigger thanks to easy-listening programming coupled with an expansive franchise model that enabled local stations to share in the ad revenue. Television reruns the same story, with Fox News managing to reach a broad audience with little previous exposure to TV ads.

Tall stories, half-truths and sensational headlines are tried and tested methods used by media companies to sell more ads. On the internet, every click on a “Five Tips You Won’t Believe” banner also earns money for the ad agencies. However, so do visits to “Hillary’s Secret Concentration Camps”. In this sense, the distribution of fake news through Facebook and Google has always been a natural part of their business model.

The doctor is expensive

Disinformation about a person is handled by defamation law. For specific historical events, like the Holocaust, most European countries have laws that make denial a criminal offence. Spreading certain other kinds of wrong information, however, is not illegal – such as claiming that the Earth is flat.

Laws in this field are always contentious, given the tension with the right to free speech and the freedom of the press. Deciding what takes precedence is rarely obvious, and so normally a judge has the final word as to whether censorship is appropriate.

However, the courts are overloaded, money to expand them is lacking, and the amount of rubbish on the internet is gargantuan. Therefore legislators are eagerly looking at alternatives.

Let’s try self-medication

In recent years, the approach at the European level to relieve the courts has been one of administrative measures and self-regulation.

In 2011, article 25 of the Directive on Combating Sexual Exploitation of Children introduced voluntary website blocking lists at the European level. The goal was to make websites related to sexual child abuse unreachable in case closing them down or arresting the criminals behind them turned out unfeasible.

The 2010 revision of the Directive on Audiovisual Media Services (AVMS), originally intended for TV broadcasters and media companies, was broadened to also partially cover services like YouTube. It requires sites that enable video sharing, and only those sites, to take measures against, among other things, hate speech. A procedure to broaden this required policing further is ongoing.

This fight was intensified by means of a Code of Conduct on Online Hate Speech, which the European Commission agreed in 2016 with Facebook, Microsoft, Twitter and YouTube. These companies have agreed to take the lead in combating this kind of unwanted behaviour.

The Europol regulation, also from May 2016, complements this code of conduct. It formalised Europol’s “Internet Referral Unit” (IRU) in article 4(m). Europol itself cannot take enforcement actions. As such, the IRU is limited to reporting unwanted content to the online platforms themselves “for their voluntary consideration of the compatibility of the referred internet content with their own terms and conditions.” The reported content need not be illegal.

The European Commission’s communication on Tackling Illegal Online Content from 2017 subsequently focussed on how online platforms can remove reported content as quickly as possible. Its core proposal consists of creating lists of “trusted flaggers”. Their reports on certain topics should be assumed valid, and hence platforms can check them less thoroughly before removing flagged content.

The new, and for now voted down, Copyright Directive would make video sharing sites themselves liable for copyright infringements by their users, both directly and indirectly. This would force them to install upload filters. Negotiations between the institutions on this topic will resume in September.

Concerning fake news, the European Commission’s working document Tackling Online Disinformation: a European Approach from 2017 contains an extensive section on self-regulation. In January 2018, the Commission created a “High Level Working Group on Fake News and Online Disinformation”, composed of various external parties. Their final report proposes that a coalition of journalists, ad agencies, fact checkers, and so on, be formed to write a voluntary code of conduct. Finally, the Report on the Public Consultation from April 2018 also mentions a clear preference by the respondents for self-regulation.

Follow-up of the symptoms

At the end of 2016 (a year late), the European Commission published its first evaluation of the directive against the sexual exploitation of children. It includes a separate report on the blocking lists, but contains no data on their effectiveness or side effects. This prompted a damning resolution from the European Parliament, in which it “deplores” the complete lack of statistical information regarding blocking, the removal of websites, or the problems experienced by law enforcement due to erased or removed information. It asked the European Commission to do its homework better in the future.

However, for the Commission this appears to be business as usual. In January 2018, four months after its Communication on Tackling Illegal Online Content, it sent out a press release calling for “more efforts and quicker progress”, without any evaluation of what had already been done. The original document moreover contained no concrete goals or evaluation metrics, which raises the question: more and quicker than what, exactly? This was followed in March 2018 by a Recommendation from that same European Commission in which everyone, except the Commission and the Member States themselves, was called upon to further increase their efforts. The Commission now wants to launch a Directive on this topic in September 2018, undoubtedly with requirements for everyone but itself to do even more, even more quickly.

Referral to the pathogen

Online platforms have the right, within the boundaries of the law, to implement and enforce terms of use. What is happening now, however, goes quite a bit further.

More and more, decisions on what is illegal are systematically outsourced to online platforms. Next, covenants between government bodies and these platforms include the removal of content that is not illegal. The public resources and the authority of Europol are used to detect such content and report it. Finally, the platforms are encouraged to perform fewer fact checks on reports from certain groups, and there are attempts to make the platforms themselves liable for their users’ behaviour. This would only make them more inclined to pre-emptively erase controversial content.

When a governmental body institutes measures, these are always put to the test of the European Charter. Proportionality, effectiveness and subsidiarity need to be respected in the light of fundamental rights such as the right to free speech, the prohibition of arbitrary application of the law, and the right to a fair trial. Not prosecuting certain categories of unwanted behaviour, or not even making them illegal, and instead “recommending” that online platforms take action against them, undercuts these fundamentals of our rule of law in a rather blunt way.

Moreover, these online platforms are not just random companies. As the founders of Google wrote in their original academic article on the search engine:

For example, in our prototype search engine one of the top results for cellular phone is ‘The Effect of Cellular Phone Use Upon Driver Attention’… It is clear that a search engine which was taking money for showing cellular phone ads would have difficulty justifying the page that our system returned to its paying advertisers. For this type of reason and historical experience with other media, we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.

Meanwhile, Google has become one of the largest advertising agencies in the world. Facebook and Twitter also obtain the majority of their revenue from selling ads. If the Commission were to ask whether we should outsource the enforcement of our fundamental rights to ad agencies, the response would presumably be quite different from the response to the umpteenth hollow announcement about how internet platforms should address illegal online content more quickly and more thoroughly.

A difficult diagnosis

If we look specifically at fake news, there are two additional problems: there is no definition, and there are few studies about it in its current form. Even the public consultation on fake news by the European Commission caused confusion by giving hate speech, terrorism and child abuse as examples of “already illegal fake news”. However, none of these terms refer to concepts that are necessarily fake or news. This makes it hard to draw conclusions from the answers, because it is unknown what the respondents understood by the term. The Eurobarometer survey on this topic has similar problems.

This does not mean that no information exists. The German NGO Stiftung Neue Verantwortung performed an experiment by spreading fake news through purchased Twitter followers. They drew some interesting conclusions:

– Fake news is like memes: its essence is not its existence, but how well it is picked up. This means that blocking the source of popular fake news will not stop it from spreading;
– Fake news is spread by strongly linked users, while debunking happens through a much more varied group of people. Hence, people with similar opinions seem to share the same fake news, but it seems to influence the general public less strongly.

An analysis by EDRi member Panoptykon on artificially increasing the popularity of Twitter messages on Polish politics led to similar conclusions. There are bubbles of people who interact very little with each other. Each bubble contains influencers who talk about the same topics, but they seldom talk directly to each other. Prominent figures and established organisations, rather than robots (fake accounts), steer the discussions. Robots can be used to amplify a message, but by themselves do not create or change trends.

It is virtually impossible to distinguish robots from professional accounts by only looking at their network and activity. It is therefore very hard to automatically identify such accounts for the purpose of studying or blocking them. These are only small-scale studies, and one has to be careful about drawing general conclusions from them. They certainly do not claim that fake news has no influence, or that we should just ignore it. That said, they do contain more concrete information than all the pages on this topic published to date by the European Commission.

So what should we do? Miracle cures are in short supply, but there are a few interesting examples from the past. The year 1905 saw a revolt against patent medicine after investigative journalists exposed its dangers. Later, in the 1950s, TV quiz shows were found to favour telegenic candidates because of their beneficial effect on ratings. Revenue-driven content has been around forever. Independent media should therefore be an important part of the solution.

The cure and the disease

The spectacular failure of the political establishment in both the US and the UK could not possibly have been its own doing, so a different explanation was called for. Forget about the ruckus back in the day about Obama’s birth certificate or his alleged secret Muslim agenda, David Cameron’s desperate ploy to cling to power, and the tradition of tabloids and their made-up stories in the UK. This is something completely different. Flavour everything with the dangers of online content and present yourself as the digital knight on the white horse who will set things straight. Or rather, who orders those making money from sensation and clickbait (such as fake news) to set things straight as they see fit.

The above is oversimplified, but it is incredible how this European Commission is casually promoting the Facebooks and Googles of this world to become the keepers of European fundamental rights. Protecting democracy and the rule of law is not a business model. It is a calling. One that few will attribute to Mark Zuckerberg.

This article originally appeared in Dutch.
Original article: De nepstrijd tegen het nepnieuws


Read more:

Press Release: “Fake news” strategy needs to be based on real evidence, not assumption (26.04.2018)

ENDitorial: Fake news about fake news being news (08.02.2017)

(Contribution by Jonas Maebe, EDRi observer)



25 Jul 2018

New Protocol on cybercrime: a recipe for human rights abuse?


From 11 to 13 July 2018, the Electronic Frontier Foundation (EFF) and European Digital Rights (EDRi) took part in the Octopus Conference 2018 at the Council of Europe together with Access Now to present the views of a global coalition of civil society groups on the negotiations of more than 60 countries on access to electronic data by law enforcement in the context of criminal investigations.


There is a global consensus that mutual legal assistance among countries needs to be improved. However, recognising its inefficiencies should not translate into bypassing Mutual Legal Assistance Treaties (MLATs) by going to service providers directly, thereby losing the procedural and human rights safeguards embedded in them. Some of the issues with MLATs can be solved by, for example, technical training for law enforcement authorities, simplification and standardisation of forms, single points of contact, or increased resources. For instance, thanks to a recent US “MLAT reform programme” that increased resources to handle MLATs, the US Department of Justice reduced the number of pending cases by a third.

There is a worrisome legislative trend emerging through the US CLOUD Act and the European Commission’s “e-evidence” proposals to access data directly from service providers. This trend risks creating a race to the bottom in terms of due process, court checks, fair trials, privacy and other human rights safeguards.

If the current Council of Europe negotiations on cybercrime focused on improving mutual legal assistance, they could offer an opportunity to create a human rights-respecting alternative to dangerous shortcuts such as those proposed in the US CLOUD Act or the EU proposals. However, civil rights groups have serious concerns from both a procedural and a substantive perspective.

This process is being conducted without regular and inclusive participation of civil society or data protection authorities. Nearly 100 NGOs wrote in April 2018 to the Council of Europe’s Secretary General because they were not duly included in the process. While the Council of Europe issued a response, civil society groups reiterated that civil society participation and inclusion go beyond a public consultation, participation in a conference, and comments on texts preliminarily agreed by states. Human rights NGOs should be present in drafting meetings, both to learn from the law enforcement expertise of the 60+ countries and to provide human rights expert input in a timely manner.

From a substantive point of view, the process is being built on the faulty premise that anticipated signatories to the Convention on cybercrime (“the Budapest Convention”) share a common understanding on basic protections of human rights and legal safeguards. As a result of this presumption, it is unclear how the proposed Protocol can provide for strong data protection and critical human rights vetting mechanisms that are embedded in the current MLAT system.

One of the biggest challenges in the Council of Europe process to draft an additional protocol to the Cybercrime convention – a challenge that was evident in the initial Cybercrime convention itself and in its article 15 in particular – is the assumption that signatory Parties share (and will continue to share) a common baseline of understanding with respect to the scope and nature of human rights protections, including privacy.

Unfortunately, there is neither a harmonised legal framework among the countries participating in the negotiations nor a shared human rights understanding. Experience shows that countries need to bridge the gap between national legal frameworks and practices on the one hand, and human rights standards established by the case law of the highest courts on the other. For example, the Court of Justice of the European Union (CJEU) has held on several occasions that blanket data retention is illegal under EU law. Yet the majority of EU Member States still have blanket data retention laws in place. Other states involved in the protocol negotiations, such as Australia, Mexico and Colombia, have implemented precisely the type of sweeping, unchecked and indiscriminate data retention regime that the CJEU has ruled out.

As a result of this lack of harmonised human rights protections and legal safeguards, the forthcoming protocol proposals risk:

– Bypassing critical human rights vetting mechanisms inherent in the current MLAT system that are currently used to, among other things, navigate conflicts in fundamental human rights and legal safeguards that inevitably arise between countries;

– Encoding practices that fall below the minimum standards being established in various jurisdictions, ignoring human rights safeguards developed primarily in the case law of the European Court of Human Rights and the Court of Justice of the European Union, among others;

– Including few substantive limits, and instead relying on the legal systems of signatories to provide enough safeguards to ensure human rights are not violated in cross-border access situations, together with a general and non-specific requirement that signatories ensure adequate safeguards (see Article 15 of the Cybercrime Convention) without any enforcement.

Parties to the negotiations should render human rights safeguards operational – as human rights are the cornerstones of our society. As a starting point, NGOs urge countries to sign, ratify and diligently implement Convention 108+ on data protection. In this sense, EDRi and EFF welcome the comments of the Council of Europe’s Convention 108 Committee.

Finally, civil society groups urge that the forthcoming protocol not establish a mandatory or voluntary direct access mechanism for obtaining data from companies without appropriate safeguards. While the proposals seem to be limited to subscriber data, there is a serious risk that the interpretation of what constitutes subscriber data will be expanded so as to lower safeguards, including access to metadata directly from providers via non-judicial requests or demands.

This could conflict with clear court rulings from the European Court of Human Rights, such as Benedik v. Slovenia, and even with national case law, such as that of Canada’s Supreme Court. The global NGO coalition therefore reiterates that the focus should be on making mutual legal assistance among countries more efficient.

Civil society is ready to engage in the negotiations. For now, however, the future of the second additional protocol to the Cybercrime Convention remains unclear, raising many concerns and questions.

Read more:

Joint civil society response to discussion guide on a 2nd Additional Protocol to the Budapest Convention on Cybercrime (28.06.2018)

How law enforcement can access data across borders — without crushing human rights (04.07.2018)

Nearly 100 public interest organisations urge Council of Europe to ensure high transparency standards for cybercrime negotiations (03.04.2018)

A Tale of Two Poorly Designed Cross-Border Data Access Regimes (25.04.2018)

Cross-border access to data has to respect human rights principles (20.09.2017)

(Contribution by Maryant Fernández Pérez, EDRi, and Katitza Rodríguez, EFF)



25 Jul 2018

The EU gets another opportunity to improve copyright rules

By Guest author

The EU gets another opportunity to improve a copyright proposal that would have threatened the open web. On 5 July 2018, Members of the European Parliament (MEPs) rejected the mandate to proceed with a flawed proposal for an EU Copyright Directive that would have had detrimental effects on internet freedom, access to knowledge, and collaboration online. This vote means that the text of the draft Directive is open again for amendments to be proposed and voted on by Parliament, likely at the next plenary session on 12 September 2018.


The Wikimedia Foundation applauds the rejection of the mandate and the opportunity this offers for a wider discussion to create a balanced, modern copyright system for Europe. Prior to the vote on 5 July 2018, the Wikimedia Foundation Board of Trustees, more than ten Wikipedia language communities, and many Wikimedia chapters in Europe took a stand against the draft Directive. Some Wikimedians even took to the streets to protest and educate the public.

Wikipedia’s chapters ran messages asking European users to contact their MEPs, or went dark for a day to protest the proposal. Unlike other online platforms, Wikipedia’s global community of volunteer editors makes decisions about how they want the site to be involved in policy debates that affect Wikipedia, access to knowledge, and the free and open internet. The banners and blackouts protesting the draft Directive were the direct result of Wikipedia’s volunteer editors making their voices heard through a democratic decision to make a statement.

Of course, the Wikimedia movement was not alone in its protest against the Directive. Countless civil society stakeholders, technologists, creators, and human rights defenders also spoke out against the proposal.

Wikimedia believes strongly that the draft Directive went too far to expand copyright and failed to provide critical protections for material that should be both free and public. The Foundation fears that the overbroad requirements in the draft Directive would actually lead to further dominance of the internet by large companies that have the resources to build burdensome filtering systems. This runs contrary to the vision of an open internet that fostered Wikipedia’s creation.

Now that the proposed copyright Directive is open for amendments once again, it is time to support improvements that harmonise copyright across the EU and preserve basic online freedoms. A groundswell of concern over this proposal led to its rejection in Parliament.

Let’s make sure that the next round of amendments takes these concerns into account and results in a copyright directive which will truly deliver a free and open internet for all.

Read more:

How the EU copyright proposal will hurt the web and Wikipedia (02.07.2018)

Wikipedia blackout (04.07.2018)

Press Release: EU Parliamentarians support an open, democratic debate on Copyright Directive (05.07.2018)

(Contribution by Jan Gerlach, EDRi-member Wikimedia Foundation)



25 Jul 2018

Censoring Wikipedia in Turkey is censoring our collective knowledge

By Guest author

2018 will be a pivotal year for the internet. For the first time in human history, over half of the world’s population will be online. But new threats endanger the sustainability of the world’s largest information source.


By the end of this year, for the first time, one out of every two humans will have access to the internet. This tremendous milestone represents over 40 years of continuous investment in creating an open space where a unique culture of collaboration and sharing has thrived – the grand sum of the biggest human experiment to democratize and disperse knowledge in our history. In this exciting age of the internet, new business models have emerged, while our perspective on how we work, live and play has been re-shaped to match the growth of the web. But some worrying developments taking place on the global stage right now could threaten the natural growth of the internet as it continues its role as a great equalizer of access.

More censorship in many places

Everywhere we look, online censorship is becoming more prevalent, increasingly insidious, and worryingly surreptitious.

In the last few months, we have seen the Russian and Iranian governments both continue their numerous attempts to block Telegram, arguing that free speech on the platform enables unfavourable discussions about their governments. Vietnam just passed a law that will further limit people’s ability to use the internet as an avenue for free discourse, while in Uganda, a new law passed last month will tax social media users for ‘gossip’ on social channels. As the world’s largest collaboratively-built platform, with over 46 million articles, Wikipedia has been monitoring these developments closely. We are increasingly concerned that for the last 12 months, Wikipedia has remained banned in Turkey, denying the world information about one of the world’s most vibrant economies.

Turkey block is far-reaching

The Turkish block is the most expansive government ban ever imposed on Wikipedia, and includes Wikipedias across nearly 300 languages. The court order imposing the ban is based on two articles in English Wikipedia which the court said damaged the reputation and prestige of the Republic of Turkey. While we respectfully disagreed with the court’s decision as it applied to the articles at the time the block was imposed, we also wish to point out that those articles remain open to editing, according to Wikipedia’s neutral editorial policies, and have been changed substantially by Wikipedia volunteer editors since the block was imposed.

Critically, the ban brings into focus the fact that all the content on Wikipedia is built through the collaborative efforts of millions of people who create content for the benefit of the world at large.
The unique value of Wikipedia is that it is a collaboration of hundreds of thousands of people across the globe. Together they make decisions about what information to include in Wikipedia and how that information is presented. The editing process proceeds according to policies developed and overseen by these independent volunteer editors. Wikipedia’s policies require reliable sources to verify information included in Wikipedia, and neutrality, especially when covering controversies in which there are differing views. This is an ongoing process and means that Wikipedia articles are under constant improvement. It is a process that benefits from more editors and differing perspectives, which is one of the reasons why ending the block in Turkey is so important.

This manner of creating and improving content remains to this day Wikipedia’s most powerful and unique contribution to the internet. The more people participate on Wikipedia, the more neutral, reliable, and accurate its articles become. The Wikimedia Foundation and independent Wikipedia volunteer editors have offered to provide open, public training on Wikipedia in Turkey once the block is lifted, as we have done in other countries, with the goal of increasing the number of editors and perspectives on Wikipedia.

Lift the ban!

This is why we continue to respectfully request that the government lift the ban so that Wikipedia can return to serving as a valuable, free educational resource on a wide range of topics, including science, engineering, art, and culture. We would like to see the Turkish people able to contribute to the global conversation, including Turkish topics, on Wikipedia. As we have repeatedly noted, we are all made poorer for the absence of contributions by the Turkish people.

And at a pivotal moment such as this, when more people are getting online in search of freely accessible information, these kinds of actions can only set back the remarkable progress the internet has made over the last 40 years. When we censor information on Turkey, we censor our collective history as humans.

This article was written by Eileen Hershenov, General Counsel for the Wikimedia Foundation, and published at under a Creative Commons BY-NC-SA 4.0 licence.

Original article: In Censoring Wikipedia in Turkey, We Are Censoring Our Collective Knowledge (15.07.2018)

Read more:

Half the world’s population is still offline. Here’s why that matters. (30.05.2018)

Viet Nam: New Cybersecurity law a devastating blow for freedom of expression (12.06.2018)

Social media use taxed in Uganda to tackle ‘gossip’ (01.06.2018)

Wikimedia Foundation urges Turkish authorities to restore access to Wikipedia (30.04.2018)



25 Jul 2018

EU Council considers undermining ePrivacy

By IT-Pol

On 19 October 2017, the European Parliament’s LIBE Committee adopted its report on the ePrivacy Regulation. The amendments improve the original proposal by strengthening confidentiality requirements for electronic communication services, introducing a ban on tracking walls, providing for legally binding signals for giving or refusing consent to online tracking, and adding privacy-by-design requirements for web browsers and apps.


Before trialogue negotiations can start, the Council of the European Union (the Member States’ governments) must adopt its general approach. This process is still ongoing with no immediate end in sight. An analysis of the proposed amendments in Council documents so far shows that the Council is planning to significantly weaken the ePrivacy text compared to the Commission proposal and, especially, the LIBE report.

Metadata for electronic communications should be regarded as sensitive personal data, similar to the categories listed in Article 9 of the General Data Protection Regulation (GDPR). Under the ePrivacy Directive (current legal framework), necessary metadata may be processed for purposes of subscriber billing and interconnection payments, and, with consent of the user, for value added services. Apart from data retention requirements in national law, no other processing is allowed. In the ePrivacy Regulation, the Commission proposal and the LIBE text both uphold the principle of only allowing processing of electronic communications metadata for specific purposes laid down in law or with consent of the end-user. As a new specific purpose, processing for monitoring quality of service requirements and maintaining the availability of electronic communications networks can be done without consent.

The Council proposals significantly expand the permitted processing of metadata without consent by the electronic communications service (ECS) provider. The billing/interconnection purpose is extended to include processing when it is necessary “for the performance of a contract to which the end-user is party”. This will allow the ECS provider to process metadata not directly related to billing through provisions in the contract with the end-user. Service offerings by ECS providers are generally moving towards simpler products with increased reliance on flat rate tariffs, which should reduce the processing and storage of metadata necessary for billing purposes. These privacy benefits will be lost with the Council text.

In December 2017, the Council proposed further processing of metadata without consent for scientific research or statistical purposes based on Union or Member State law. Despite the mandatory safeguards, which include encryption and pseudonymisation, this is a very problematic amendment since a potentially large amount of metadata, which would otherwise be deleted or anonymised, will be retained and stored in identifiable form. Data breaches and law enforcement access are two very specific data protection risks created by this amendment.

The latest text from the Austrian Presidency (Council document 10975/18) goes even further than this by proposing a new general provision for further processing of metadata for compatible purposes inspired by Article 6(4) of the GDPR. This comes very close to introducing “legitimate interest” as a legal basis for processing metadata by the ECS provider, something that has previously been ruled out because metadata for electronic communications is comparable to sensitive personal data under the case law of the Court of Justice of the European Union (CJEU). GDPR Article 9 does not permit the processing of sensitive personal data with legitimate interest as the legal basis. In March 2018, the former Bulgarian Presidency specifically noted that it is highly doubtful whether a non-specific provision for permitted processing would, given the sensitive nature of the data involved, be in line with the case-law of the CJEU.

The LIBE Committee adopted amendments to ensure that electronic communications content was protected under the ePrivacy Regulation during transmission and if the content is subsequently stored by the ECS provider. This is important because storage of electronic communications content is an integral part of many modern electronic communications services, such as webmail and messenger services. However, the Council amendments limit the protection under the ePrivacy Regulation to the transmission of the communication, a period which may be a fraction of a second. After the receipt of the message, the processing falls under the GDPR which could allow processing of personal data in electronic communications content (such as scanning email messages) based on legitimate interest rather than consent of the end-user. As suggested by the Council recital, the end-user can avoid this by deleting the message after receipt, but this would entirely defeat the purpose of many modern electronic communications services.

In Article 8 of the draft ePrivacy Regulation, the LIBE Committee adopted a general ban on tracking walls. This refers to the practice of making access to a website dependent on end-user consent to processing of personal data through tracking cookies (or device fingerprinting) that is not necessary for the provision of the website service requested by the end-user. This practice is currently widespread since many websites display cookie consent banners where it is only possible to click ‘accept’ or ‘OK’.

The Council text goes in the opposite direction with proposed wording in a recital which authorises tracking walls, in particular if a payment option is available that does not involve access to the terminal equipment (e.g. tracking cookies). This amounts to a monetisation of fundamental rights, as EU citizens will be forced to decide whether to pay for access to websites with money or by being profiled, tracked and abandoning their fundamental right to protection of personal data. This inherently contradicts the GDPR since consent to processing of personal data can become the counter-performance for access to a website, contrary to the aim of Article 7(4) of the GDPR.

Finally, the latest text from the Austrian Presidency proposes to completely delete Article 10 on privacy settings. Article 10 requires web browsers and other software permitting electronic communications to offer privacy settings which prevent third parties from accessing and storing information in the terminal equipment, and to inform the end-user of these privacy settings when installing the software. An example of this could be an option to block third party cookies in web browsers. Such privacy settings are absolutely critical for preventing leakage of personal data to unwanted third parties and for protecting end-user privacy when consent to tracking is coerced through tracking walls. The recent Cambridge Analytica scandal should remind everyone, including EU Member States’ governments, of the often highly undesirable consequences of data disclosures to unknown third parties.
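As an illustration of what “third party” means here, a browser treats a cookie as third-party when the site setting it differs from the site the user is actually visiting. The Python sketch below approximates that comparison; the helper names, URLs and domains are hypothetical, and real browsers consult the Public Suffix List rather than the naive last-two-labels rule used here:

```python
from urllib.parse import urlsplit

def registrable_domain(url: str) -> str:
    """Naive approximation: the last two labels of the host name.
    Real browsers use the Public Suffix List for this instead."""
    host = urlsplit(url).hostname or ""
    return ".".join(host.split(".")[-2:])

def is_third_party(page_url: str, resource_url: str) -> bool:
    """A cookie set by resource_url counts as third-party when its
    registrable domain differs from that of the visited page."""
    return registrable_domain(page_url) != registrable_domain(resource_url)

# Hypothetical example: a tracking pixel embedded in a news page.
print(is_third_party("https://news.example/article",
                     "https://tracker.example/pixel.gif"))   # third-party
print(is_third_party("https://www.example.com/",
                     "https://static.example.com/img.png"))  # first-party
```

A privacy setting of the kind Article 10 describes would simply refuse to store or send any cookie for which such a check returns true.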

If Article 10 is deleted, it will be possible to offer software products that are set to track and
invade individuals’ confidential communications by design and by default, with no possibilities for the individual to change this by selecting a privacy-friendly option that blocks data access by third parties. This goes in the complete opposite direction of the LIBE report, which contains amendments to strengthen the principle of privacy by design by requiring that access by third parties is prevented by default, and upon installation to ask the end-user to either confirm this or select another, possibly less privacy-friendly, option.

The rationale for deleting Article 10 given by the Austrian Presidency is the burden on software vendors and consent fatigue for end-users. The latter is somewhat ironic since technical solutions, such as genuine privacy by design requirements and innovative ways to give or refuse consent, like a mandatory Do Not Track (DNT) standard, are needed to reduce the number of consent requests in the online environment. The Council amendments for articles 8 and 10 would aggravate the current situation, where end-users on countless websites are forced to give essentially meaningless consent to tracking because the cookie banner only provides the option of clicking ‘accept’.
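Technically, the DNT mechanism mentioned above is just a single HTTP request header, `DNT: 1`, that the browser attaches to every request; what a mandatory standard would change is its legal force, not its technical shape. A minimal Python sketch (the URL is a placeholder, and the request is only constructed, never sent):

```python
from urllib.request import Request

# Build a request carrying the Do Not Track preference.
# "DNT: 1" signals that the user refuses tracking; under a legally
# binding consent-signal regime, recipients would have to honour it.
req = Request("https://example.org/", headers={"DNT": "1"})

# urllib normalises header names to "Dnt" internally.
print(req.get_header("Dnt"))  # prints "1"
```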

If the ePrivacy amendments in 10975/18 and earlier Council documents are adopted as the general approach, the Council will enter trialogue negotiations with a position that completely undermines the ePrivacy Regulation by watering down all provisions which provide stronger protection than the GDPR. This would put a lot of pressure on the European Parliament negotiators to defend the privacy rights of European citizens. For telecommunications services, which presently enjoy the strong protection of the ePrivacy Directive, the lower level of protection will be particularly severe, even before considering the dark horse of mandatory data retention that EU Member States are trying to uphold, in part through amendments to the ePrivacy Regulation.

EDRi, along with EDRi members Access Now, Privacy International and IT-Pol Denmark, has communicated its concerns about the proposed Council amendments through letters to WP TELE, as well as at a civil society meeting with Council representatives on 31 May 2018 organised by the Dutch Permanent Representation and the Bulgarian Council Presidency.

Read more:

e-Privacy: What happened and what happens next (29.11.2017)

EU Member States fight to retain data retention in place despite CJEU rulings (02.05.2018)

ePrivacy: Civil society letter calls to ensure privacy and reject data retention (24.04.2018)

Civil society calls for protection of communications confidentiality (13.06.2018)

Civil society letter to WP TELE on the ePrivacy amendments in Council document 10975/18 (13.07.2018)

(Contribution by Jesper Lund, EDRi-member IT-Pol)