21 Nov 2017

The Civil Liberties Committee rejects #censorshipmachine

By Diego Naranjo

On 20 November 2017, the European Parliament (EP) Committee on Civil Liberties, Justice and Home Affairs (LIBE) voted against the mandatory implementation of “censorship machines” (aka upload filters) in its Opinion on the Copyright Directive proposal. After a long process and diligent work led by Polish Member of the European Parliament (MEP) Michał Boni (EPP) and the shadow rapporteurs to achieve the best compromise in the LIBE Opinion, the Committee voted to keep Mr Boni’s original proposal, together with two further amendments (on hyperlinks and “notice and takedown”).

The Opinion proposes two key changes to the most harmful parts of the proposal:

  1. the removal of the obligation to filter every single upload to the internet using content recognition technologies; and
  2. a clarification that measures to ensure enforcement of licensing arrangements should not include general monitoring obligations for internet companies.

The balanced approach by Mr Boni MEP and the rest of the Committee is welcome. However, a stronger approach – the deletion of Article 13 – would have sent a clearer signal on the risks of the proposal to the Committee on Legal Affairs (JURI). JURI is in charge of writing the final Report on behalf of the European Parliament.

Apart from removing those two harmful aspects, LIBE also inserted a new paragraph (2a). The paragraph suggests that users should have access to “a court or another competent authority” to react to potentially unlawful actions against their online content. This essentially restates current law. On top of that, an amendment from Daniel Dalton MEP stating that “hyperlinking to an already publicly available content does not constitute a communication to the public” is one of those pieces of common sense that seem to be hard to pass in the EP.

On the less positive side of the text, LIBE has passed an amendment which, similarly to the original proposal from the Commission, asks that information society service providers “should take appropriate and proportionate measures to ensure the protection of works”. The text surprisingly suggests that such measures “should” respect the Charter of Fundamental Rights of the European Union, even though such measures would be taken by private companies, who are not bound by the Charter.

Despite not deleting Article 13, LIBE has firmly opposed the imposition of censorship machines in the copyright proposal. After this vote, the next and last parliamentary step before the probable launch of closed-door “trilogue” negotiations with the Member States in the Council is the vote in JURI (the lead Committee on this proposal), which is scheduled for 24-25 January 2018.

Before this final vote, there is still time to act on this and make sure that no censorship machines that limit citizens’ freedom of expression are established in the EU.

Civil society calls for the deletion of the #censorshipmachine (16.10.2017)

Leak: Three EU countries join forces for restrictions & copyright chaos (26.11.2017)

New Estonian Presidency “compromise” creates copyright chaos (03.11.2017)

Copyright reform: Document pool

(Contribution by Diego Naranjo, EDRi)


15 Nov 2017

Member Spotlight: Panoptykon Foundation

By Panoptykon Foundation

This is the eighth article of the series “EDRi member in the Spotlight” in which our members introduce themselves and their work in depth.

Today we introduce our Polish member Panoptykon Foundation.

1. Who are you and what is your organisation’s goal and mission?

The Panoptykon Foundation is the only organisation in Poland that monitors state agencies and corporations that collect massive amounts of data, and has been doing so since 2009. We carry out investigations, monitor the legislative process, make legal interventions and inspire public debate. We help people regain control over their own data.

2. How did it all begin, and how did your organisation develop its work?

The Panoptykon Foundation was established in April 2009 on the initiative of a group of engaged lawyers, to express their opposition to surveillance. After one year of after-hours voluntary work by its two founders, Katarzyna Szymielewicz and Małgorzata Szumańska, the organisation received its first project grants. Today, the Panoptykon Foundation is a well-integrated team of professionals, with a long-term strategy and a track record of significant successes in watchdog and awareness-raising activity.

3. The biggest opportunity created by advancements in information and communication technology is…

Combating exclusion. Equal opportunities to access information and knowledge.

4. The biggest threat created by advancements in information and communication technology is…

For us as individual citizens/consumers: losing control over our own data.

5. Which are the biggest victories/successes/achievements of your organisation?

Before Panoptykon, there was no public debate on surveillance in Poland. We generated sustained media interest in topics such as the uncontrolled use of telecommunication and internet data by intelligence services, the integration of public databases, the pervasive use of video surveillance (including in schools), the use of profiling in the context of public services, data transfers in trade agreements, and the European data protection reform. Some of the issues that we framed triggered massive public mobilisation (for example, the protests against the Anti-Counterfeiting Trade Agreement, ACTA, in 2012).

6. If your organisation could now change one thing in your country, what would that be?

We would like people to be more resistant to the policy of fear: to contest the rhetoric of limiting their rights and freedoms to “protect them from terrorists”.

7. What is the biggest challenge your organisation is currently facing in your country?

The mistreatment of fundamental rights and the rule of law by decision makers. The polarisation of the public debate: you have to be either for or against the government; you cannot support the good ideas and criticise the bad ones, you have to declare yourself on one side or the other.

8. How can one get in touch with you if they want to help as a volunteer, or donate to support your work?

Visit en.panoptykon.org or panoptykon.org (in Polish). Contact us through e-mail: fundacja@panoptykon.org

EDRi member in the spotlight series

(Contribution by Anna Obem, EDRi member Panoptykon Foundation, Poland)



15 Nov 2017

Who defends the victims of mass surveillance? Tech companies could

By Electronic Frontier Foundation

Two clocks are ticking for US tech companies in the power centers of the modern world. In Washington, lawmakers are working to reform the Foreign Intelligence Surveillance Act (FISA) Section 702 before it expires on 31 December 2017. Section 702 is the main legal basis for US mass surveillance, including the programs and techniques that scoop up the data transferred by non-US individuals to US servers. Upstream surveillance collects communications as they travel over the internet backbone, and downstream surveillance (better known as PRISM) collects communications from companies like Google, Facebook, and Yahoo.

Image: EFF (CC-BY)

Both programmes have used Section 702’s vague definitions to justify the wholesale seizure of internet and telephony traffic: any foreign person located outside the United States could be subjected to surveillance if the government thinks that surveillance would acquire “foreign intelligence information”—which here means information about a foreign power or territory that “relates to […] the national defense or the security [or] the conduct of the foreign affairs of the United States.”

Without fixes to Section 702’s treatment of foreign users, the customers of American internet services will continue to have personal information and communications sucked up, without limit, into American intelligence agency databases.

Meanwhile, in Luxembourg, at the heart of the EU, the Court of Justice of the European Union (CJEU) is due to take a renewed look at how US law protects the privacy rights of European customers, and decide whether it is sufficiently protective for American companies to be permitted to transfer European personal data to servers in the United States.

----------------------------------------------------------------- Support our work - make a recurrent donation! https://edri.org/supporters/ -----------------------------------------------------------------

The two ticking timers are inextricably linked. Last time the CJEU reviewed US privacy law, in Schrems v. Data Protection Commissioner, it saw no indication that the US mass surveillance programme was necessary or proportionate, and noted that foreign victims of surveillance had no right of redress for its excesses. US law, it stated, was insufficient to protect Europeans, and it declared the EU-US Data Protection Safe Harbor agreement void, instantly shutting down a major method for transferring personal data legally between the US and Europe.

Now a similar case is weaving its way through the courts for review by the CJEU. Without profound changes in US law, its judges will almost certainly reach the same decision, stripping away yet more methods that US Internet companies might have to process European customers’ data.

This time, though, it won’t be possible to fix the problem by papering it over (as the weak Privacy Shield agreement did last time). The only long-term fix will be to give non-Americans the rights that European courts, and international human rights law expect.

Sadly, no company has yet stepped forward to defend the rights of their non-American customers. At the beginning of November, Silicon Valley companies, including Apple, Facebook, Google, Microsoft and Twitter, wrote a lukewarm letter of support for the USA Liberty Act, characterising this troublesome surveillance reauthorisation package as an improvement to “privacy protections, accountability, and transparency.” The companies made no mention of the rights of non-Americans who rely on US companies to process their data.

The USA Liberty Act reauthorises the National Security Agency (NSA) surveillance programmes for six years and makes some adjustments to government access to American communications. But the bill fails to include any legal protections for innocent foreigners abroad. Instead, the bill offers a “sense of Congress” — a statement about Congressional intention with no legal weight or enforceability — that NSA surveillance “should respect the norms of international comity by avoiding, both in actuality and appearance, targeting of foreign individuals based on unfounded discrimination.”

Previous discussions of 702 reform included demanding better justifications for seizing data. The law could, at the very least, better define “foreign intelligence” so that not every person in the world could potentially be considered a legitimate target for surveillance.

Based on these ideas, the companies could call for substantively better treatment of their foreign customers, but they have chosen to say nothing. Why? It may be that they feel that it is unlikely that such protections would pass the current Congress. But such reforms definitely won’t pass Congress unless they are proposed or supported by major Washington players like the tech giants. Much of the existing statutory language of US surveillance reform, in the USA Freedom Act and now in the USA Liberty Bill, was unimaginable until advocates spoke up for it.

The other reason may be that it’s safer to keep quiet. If the tech companies point out that Section 702’s protections are weak, then that will draw the attention of the European courts, and undermine the testimony of Facebook’s lawyers in the Irish courts that everything is just fine in American surveillance law.

If so, the companies are engaged in dangerous wishful thinking, because that ship has already sailed. In the early stages of the current CJEU court case, in the Irish High Court, Facebook and the US government both argued that current US law was sufficiently protective of foreigners’ privacy rights. They lost that argument. And without US legal reform, they’re almost certain to lose at the CJEU, the next port of call for the case. The companies need to remember what that court said in the first Schrems decision:

“Legislation permitting the public authorities to have access on a generalised basis to the content of electronic communications must be regarded as compromising the essence of the fundamental right to respect for private life, as guaranteed by Article 7 of the Charter of Fundamental Rights of the European Union.

Likewise, legislation not providing for any possibility for an individual to pursue legal remedies in order to have access to personal data relating to him, or to obtain the rectification or erasure of such data, does not respect the essence of the fundamental right to effective judicial protection, as enshrined in Article 47 of the Charter of Fundamental Rights of the European Union.”

In other words, it’s not American business practices that need to change: it’s American law. Section 702 reform, currently being debated in Congress, is the Internet companies’ last chance to head off the chaos of a rift between the EU and the US. By pushing for improvements for non-US persons in the proposed bills renewing Section 702 (or fighting for Section 702 to be rejected outright), they could stave off the European court’s sanctions and reassure non-American customers that they really do care about their privacy.

There is still time, but the clocks are ticking. If America’s biggest businesses step up and tell Congress that the privacy of non-Americans matters, that reform bills like the Liberty Act must contain improvements in transparency, redress, and minimization for everyone, not just Americans, they’ll get an audience in Washington.

They will also be heard in the rest of the world. Since the Snowden revelations, non-American customers of US internet communication providers have repeatedly asked them: “How can we trust you? You say you have nothing to do with PRISM, and you zealously protect your users’ data. But how do we know when the US government comes knocking, you’ll have your foreign users’ backs?”

Standing up in D.C. and speaking for the rights of their customers would send a powerful message that American companies believe that non-American Internet users have privacy rights too, no matter what American lawmakers currently believe.

Staying quiet sends another signal entirely: that while they might prefer a world where the law protects their foreign customers, they’re unwilling to make a noise to make that world a reality. Their customers — and competitors — will draw their own conclusions.

This article was originally published at https://www.eff.org/deeplinks/2017/10/tech-companies-could-fight-non-us-surveillance.

A Coalition Says to Congress: End 702 or Enact Reforms (06.06.2016)

Europe’s Courts Decide: Does U.S. Spying Violate Europe’s Privacy? (03.10.2017)

(Contribution by Danny O’Brien, EDRi member Electronic Frontier Foundation)



15 Nov 2017

High time: Policy makers increasingly embrace encryption

By Bits of Freedom

Encryption is of critical importance to our democracy and rule of law. Nevertheless, politicians frequently advocate weakening this technology. Slowly but surely, however, policy makers seem to be starting to embrace it.

----------------------------------------------------------------- Support our work with a one-off-donation! https://edri.org/donate/ -----------------------------------------------------------------

Encryption is essential for the protection of our digital infrastructure and enables us to safely use the internet – without it, our online environment would be a more dangerous one. Thanks to encryption, companies can better protect our personal data online and internet users can safely communicate and exchange information. This makes encryption of the utmost importance not only for our democratic liberties, but also for innovation and economic growth.

Our governments should therefore stimulate the development and implementation of encryption, more than they currently do. It is without doubt undesirable when governments force companies to create backdoors in their encryption technologies, or to incorporate other ways of weakening it. Policy makers generally grapple with this position though, as they face pressure from police and security services.

Fortunately, in 2016, the Dutch government came to the same conclusion. It rightfully determined that “cryptography plays a key role in the technological security of the digital domain”. It further stated that there were “no viable options to weaken encryption technology in general without compromising the safety of digital systems that utilise it”. Put differently, creating a backdoor for the police also creates a backdoor for criminals. Because of this, the Dutch cabinet argues that it is “undesirable to implement legislative measures that would hamper the development, availability and use of encryption in the Netherlands”.

Then again, the Netherlands is only a small country and much of its legislation is determined by the decisions made at the European level. It is therefore heartening to see that the European Parliament passed a resolution in early November 2017, calling on the European Commission and the member states to “enhance security measures, such as encryption and other technologies, to further strengthen security and privacy”. The Parliament also explicitly asked EU Member States to refrain from “enforcing measures that may weaken the networks or services that encryption providers offer, such as creating or encouraging ‘backdoors’”.

The European Commission has also spoken out on the issue. It recently published “Eleventh progress report towards an effective and genuine Security Union”, which lists measures meant to make Europe safer. One of these measures entails supporting law enforcement in dealing with encrypted information. However, the report immediately adds that this should be done “without prohibiting, limiting or weakening encryption”, since “encryption is essential to ensure cybersecurity and the protection of personal data”.

This definitely does not mean it will be smooth sailing from here on. Political positions change rapidly. The Dutch government, for example, states explicitly that weakening encryption is undesirable “at this moment in time”. All it takes for our political leaders to collectively lose their resolve is one serious terrorist attack after which law enforcement and security services investigations are hindered by encryption. It is also hard to predict how Dutch and European lawmakers will respond when pressure mounts from France, Germany or the United States.

The biggest threat, however, is probably far more subtle. Businesses are often pressured to “take their social responsibility” in fighting whatever is seen to be evil at that particular time. They are told: “You don’t want to be seen as a safe haven for terrorists, do you?” The consequence of this is that far too often, these businesses agree to make their digital infrastructure more vulnerable, without any checks or balances. This cooperative attitude is of course adopted “willingly” – but not without pressure from legislation or fear of damage to their reputation. The proposal of the European Commission in its recent policy document to create a “better and more structured collaboration between authorities, service providers and other industry partners” should be read in this light.

The European Commission struggles to find a position on encryption (31.10.2017)

EU’s plans on encryption: What is needed? (16.10.2017)

EDRi delivers paper on encryption workarounds and human rights (20.09.2017)

EDRi position paper on encryption (25.01.2016)

Encryption – debunking the myths (03.05.2017)

(Contribution by Rejo Zenger, EDRi-member Bits of Freedom, the Netherlands; translation by David Uiterwaal)



15 Nov 2017

Estonian eID cryptography mess – 750 000 cards compromised

By Joe McNamee

In 2017, a flaw causing vulnerabilities in millions of encryption keys, including national Estonian electronic ID (eID) cards, was discovered. A month and a half after the discovery, the Estonian Police publicly announced the vulnerability, but stated that the eID cards “are completely secure”.

What is public key cryptography?

Firstly, the issue concerns public key cryptography. With public key cryptography, a message is encrypted with one key and decrypted with another: a public key and a matching private key. The sender of a message obtains the recipient’s public key and uses it to encrypt the message before sending it. The recipient then decrypts the message using the private key that matches the public key used to encrypt it.
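The flow just described can be sketched with textbook RSA and deliberately tiny numbers (purely illustrative, not secure; real keys are thousands of bits long):

```python
# Public key cryptography in miniature: textbook RSA with tiny primes.
# For illustration only -- such small keys offer no security at all.

p, q = 61, 53                      # secret primes (private key material)
n = p * q                          # public modulus (3233)
e = 17                             # public exponent; (n, e) is the public key
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                # private exponent (Python 3.8+ modular inverse)

message = 42                       # a message encoded as a number smaller than n

ciphertext = pow(message, e, n)    # sender encrypts with the recipient's PUBLIC key
decrypted = pow(ciphertext, d, n)  # recipient decrypts with the matching PRIVATE key

assert decrypted == message
```

Only the holder of the private exponent `d` can reverse the encryption; knowing `n` and `e` alone is (for properly generated, full-sized keys) not enough.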


In eID systems, the private key is the eID. The private key is used to encrypt a checksum – a unique numerical representation of a digital file – of any document that is to be signed. This checksum proves the authenticity and the integrity of such a cryptographically signed document to its recipient. The encrypted checksum is often called a certificate. This would also be the basis for voting using electronic voting systems.
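The hash-and-sign step described here can be sketched with a toy textbook RSA key pair (a minimal sketch, not how a real card works: real schemes add padding and use far larger keys, and `sha256` is an assumed stand-in for whatever digest function is actually used):

```python
import hashlib

# Toy RSA key pair (textbook numbers; for illustration only).
n, e, d = 3233, 17, 2753

document = b"I hereby sign this contract."

# The checksum: a numeric digest of the document, reduced below the modulus.
checksum = int.from_bytes(hashlib.sha256(document).digest(), "big") % n

# The signer encrypts the checksum with the PRIVATE key.
signature = pow(checksum, d, n)

# The recipient opens the signature with the PUBLIC key and compares it
# against a freshly computed checksum of the received document.
assert pow(signature, e, n) == checksum   # authentic and unmodified
```

If the document is altered in transit, its freshly computed checksum will no longer match the one recovered from the signature, and verification fails.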

So, what could go wrong? Well, if there is a vulnerability that allows the public key to be used to derive the corresponding private key, anyone can create certificates of documents in someone else’s name, and the system is fundamentally compromised. This increases the risk of ID theft, the manipulation of elections, and the compromise of any other government system that relies on eID.
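This failure mode is easy to demonstrate with toy textbook RSA numbers: when the modulus is weak enough to factor, the public key alone yields the private key. (Trial division below is a stand-in for illustration; the actual 2017 flaw, known as ROCA, exploited a structural weakness in how a vendor’s library generated its primes.)

```python
from math import isqrt

# The attacker knows only the PUBLIC key.
n, e = 3233, 17

# A weak modulus can be factored; for these toy numbers trial division suffices.
p = next(f for f in range(2, isqrt(n) + 1) if n % f == 0)
q = n // p

# From the factors, the attacker reconstructs the PRIVATE exponent...
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)

# ...and can now forge a "certificate" for any checksum in the victim's name.
forged_checksum = 1234
forged_signature = pow(forged_checksum, d, n)
assert pow(forged_signature, e, n) == forged_checksum   # verifies as genuine
```

A recipient checking this forged signature against the public key has no way of telling it apart from one produced by the legitimate card holder.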

What happened in Estonia?

In 2014, an independent report found that Estonia’s e-voting system was vulnerable due to its security architecture being based on an obsolete threat model. There were “abundant lapses in operational security and procedures” including “numerous lapses in the most basic security practices” and that the voting system itself had severe vulnerabilities. The report recommended the withdrawal of e-voting.

Now, in 2017, it was discovered that a flaw in a widely used code library had caused vulnerabilities in millions of encryption keys, including national ID cards, among them the electronic ID systems of Estonia and Slovakia. In October 2017, a month and a half after being privately informed of the vulnerability, the Estonian Police announced that a “vulnerability potentially affecting digital use of Estonian ID cards” had been identified. However, they added, the ID cards “are completely secure”.

The Estonians subsequently backed away from this position and the certificates associated with the 750 000 affected ID cards were suspended. The cards are now being updated with new credentials.

Coincidentally, two weeks after being notified of the vulnerability, the Estonian Presidency of the Council of the European Union circulated a draft “compromise” on the proposal for an e-Privacy Regulation. It included the suggestion to delete Article 17 of the Commission’s proposal on “information about detected security risks”:

“In the case of a particular risk that may compromise the security of networks and electronic communications services, the provider of an electronic communications service shall inform end-users concerning such risk and, where the risk lies outside the scope of the measures to be taken by the service provider, inform end-users of any possible remedies, including an indication of the likely costs involved”.

It could be argued that Estonia has provided a huge service to the European Union. As a small EU Member State, and as current holder of the rotating Presidency of the Council of the European Union, it has, at its own cost, shown the other Member States the huge security issues around sensitive e-government projects. With countries like Ireland – which seems unable to learn from its own failures – launching a new national ID card system (which is “mandatory but not compulsory”) and planning yet another e-voting system, lessons are there to be learned.

Estonian Police notice (13.11.2017)

Digital ID cards now only work with new certificates (03.11.2017)

Initial Estonian Police statement: For the user of ID-card and mobile ID (13.11.2017)

Estonian Presidency e-Privacy “compromise” (08.09.2017)

Millions of high-security crypto keys crippled by newly discovered flaw (16.10.2017)

Independent Report on E-voting in Estonia

Estonian eID article – additional information (22.11.2017)

(Contribution by Joe McNamee, EDRi)



15 Nov 2017

The Dutch continue to fight new mass surveillance law

By Bits of Freedom

On 4 November 2017, 20 000 households in the Netherlands received a letter from the Interior Security Service, Rijksveiligheidsdienst. The letter asked people to make an appointment to have a relay installed in their home. The letter stated that this installation was necessary because of the new Intelligence and Security Services Act, which gives the secret services the power to intercept internet traffic in bulk.


Although the new law and bulk interception powers are very real, the Interior Security Service and the letter are not. Instead, they are part of a campaign by EDRi-member Bits of Freedom.

On 11 July 2017, the Dutch Senate passed the new Intelligence and Security Services Act. With the Senate vote, a years-long political battle came to an end: the secret services were given dragnet surveillance powers. While Bits of Freedom continued preparations to fight the law in court, six Dutch students called for a referendum. After the students collected first 10 000 and then over 400 000 signatures, the referendum commission announced on 1 November that a referendum about the Intelligence and Security Services Act would take place on 21 March 2018.

The 4 November letters were the first part of a series of campaigns by Bits of Freedom to inform citizens about the law and help them make an informed decision on 21 March. The letter people received directed them to a website that stated the letter was part of a campaign. On the website “Where do you draw the line?”, people are shown five short, animated videos that explain the most important and controversial parts of the new law: bulk interception, oversight, real-time access, exchange with foreign services, and the use of zero days. After each video people get to choose whether this particular part of the law is acceptable to them or not.

Within hours of the first responses to the letter appearing on social media, the Dutch secret service (AIVD) tweeted that the letter was not real. Local police forces warned citizens the letter was fake and in a few odd cases even called on people to destroy the letter. These messages only helped the campaign: people’s reactions were overwhelmingly positive and supportive!

Bits of Freedom continues to explore the possibilities of fighting the law in court, and will continue to campaign for a “no”-vote over the next couple of months. Furthermore, the organisation will do everything it can to support others in their campaigning efforts, starting with two information evenings for (potential) campaigners.

Dutch Senate votes in favour of dragnet surveillance powers (26.07.2017)

Bits of Freedom campaign website: “Where do you draw the line?” (only in Dutch)

(Contribution by Evelyn Austin, EDRi member Bits of Freedom, the Netherlands)



15 Nov 2017

Internet protocol community has a new tool to respect human rights

By Article 19

EDRi member Article 19 welcomes the Internet Research Task Force’s new “Research into Human Rights Protocol Considerations” as a much-needed tool for the internet protocol community to respect human rights.


This new document (RFC8280) marks the first milestone in a longer-term research effort to understand the impact of internet technologies on human rights. It has been reviewed by the Human Rights Protocol Considerations Research Group and by other external experts.

The research outlines the relationship between human rights, internet protocols and the internet architecture by analysing several cases. It reviews several protocols, such as the Domain Name System (DNS), the Hypertext Transfer Protocol (HTTP) and Virtual Private Networks (VPNs), for their positive and negative impacts on human rights.

The document offers a concrete set of guidelines for human rights considerations: questions and examples which protocol developers can use to ensure their protocols do not negatively impact human rights on the internet.

“Now the internet standards community has a clear set of guidelines to consider the impact of their work on human rights. This makes the internet a more rights respecting environment”, says Niels ten Oever, Head of Digital at Article 19 and one of the authors of the document.

This document is a product of the Human Rights Protocol Considerations Research Group. The group is also working on documents focusing on the right to freedom of association and protocols. If you are interested in this work, join the research group by subscribing to its mailing list: https://www.irtf.org/mailman/listinfo/hrpc

This article was originally published at https://www.article19.org/resources.php/resource/38939/en/internet-protocol-community-has-a-new-tool-to-respect-human-rights

Research into Human Rights Protocol Considerations

Freedom of Association on the Internet

On the Politics of Standards

(Contribution by EDRi member Article 19)



15 Nov 2017

School of Rock(ing) Copyright 2017: (Re-)united to #fixcopyright!

By Diego Naranjo

In September and October 2017, EDRi, Communia and Wikimedia co-organised a series of copyright-related workshops: School of Rock(ing) Copyright. The goal of the workshops was to engage local activists, researchers and associations interested in copyright to create new spaces of action at the national and European Union (EU) level.

The first School of Rock(ing) Copyright workshop was organised in Poland in November 2015. It was a success, and the demand for similar opportunities to gather activists working on copyright issues around Europe was obvious. This year, the events took place in Slovenia, Hungary and Portugal, and were organised in collaboration with local partners: the Intellectual Property Institute in Slovenia, the Center for Independent Journalism in Hungary, and Direitos Digitais in Portugal.

Upload filters, exception for education, freedom of panorama

Many questions were raised about the ongoing EU copyright reform: Where will the proposed upload filter lead us? How to make sense of the exception for education? Will we have freedom of panorama around the EU?

Apart from analysing the key aspects of the draft Copyright Directive, we also explored general copyright topics, such as users’ rights, exceptions and limitations, and Digital “Restriction” Management systems, as well as advocacy strategies. We discussed how copyright policies affect users in their daily lives, how Directives are implemented in each EU Member State, and what other copyright laws are already in place at the national level.

Image: School of Rock(ing) Copyright workshop in Lisbon

On top of that, we took special care to explain the entangled EU lawmaking process and how a citizen can influence it. To do this, we ran an EU role-playing game in which participants played the EU Council, Members of the European Parliament (MEPs) or lobbyists/advocates, and “passed” a new piece of copyright legislation. During the game, they experienced real situations such as the need to agree on a text because of an approaching deadline, having to compromise with different political groups or with other States, and listening to all kinds of stakeholders who want to influence the process in one way or another. The role play proved to be engaging – activists took their roles seriously and negotiated hard on behalf of the Member State they represented, or fought for a monopolised internet as corporate lobbyists – and offered the participants an authentic and concrete “Brussels Maze” experience.

United to #fixcopyright!

Many were intrigued by the different steps of the legislative process. The workshop gave participants tools to get involved in the complicated decision-making processes and to make their voices heard in Brussels, Ljubljana, Budapest and Lisbon. During the one and a half days of workshops, one question was asked repeatedly: How can we help to fix copyright? We analysed different online campaigns to contact Members of the European Parliament (MEPs), while recalling the need to also contact national governments, since they are an essential part of the EU decision-making process. By analysing the different steps of a campaign, we defined the key points of organising successful campaigns and preparing the right campaign messaging.

Image: School of Rock(ing) Copyright participants in Ljubljana

The experiences from the School of Rock(ing) Copyright helped to create new connections in the countries where the workshops were organised. They will also help to bridge the gap between those Member States and Brussels. With these new voices, we hope to shift both the current unsustainable views of the Estonian Presidency and the ongoing discussions on the final text in the Committee on Legal Affairs (JURI), which leads the copyright reform in the European Parliament, towards a direction that respects the rights and values of European citizens and creators.

The School of Rock(ing) EU Copyright 2017 (17.08.2017)

Copyright reform: Document pool

The Copyright Reform – a guide for the perplexed (02.11.2016)

Activist Guide to the Brussels Maze

(Contribution by Diego Naranjo, EDRi)



03 Nov 2017

New Estonian Presidency “compromise” creates copyright chaos

By Joe McNamee

Following the launch of the controversial proposed Copyright Directive in September 2016, the European Parliament and the Member States (gathered in the Council of the European Union) are now developing their positions.

The Council is working under its Estonian Presidency, which has produced a new “compromise” proposal. A few weeks after proposing massive internet filtering as a “compromise” among the different views of the Member States, the Estonian Presidency proposed yet another “compromise” on 30 October 2017. Regarding Article 13, it seems simply to have rewritten the extreme position of France, Spain and Portugal and called this a “compromise”.

This time, the “compromise” is even more extreme than the previous one. For example, Article 13.1 reads as follows (for context, it is worth reading the entire tortuous text proposed by the Estonians: 13.1, 13.1a, 13.1aa, 13.1ab, 13.1b, 13.1c, 13.2, 13.2a, 13.3, 13.4, 13.5 and recitals 37a to 37f):

“Member States shall provide that an information society service provider whose main or one of the main purposes [1] (sic) is to store and give access to the public to copyright protected works [2] or other protected subject matter uploaded by its users is performing an act of communication to the public or an act making available to the public within the meaning of Article 3(1) and (2) [3] of Directive 2001/29/EC when it intervenes in full knowledge of the consequences [4] of its action to give the public access to those copyright protected works or other protected subject matter by organising these works [5] or other subject matter with the aim of obtaining profit from their use.”

[1] How many main purposes can a service have?
[2] Virtually all content that is uploaded to the internet can be copyright-protected.
[3] This means that the provider is directly using the content from the perspective of copyright law – even when it has no knowledge of it – which means that providers would have significantly more liability for copyright-protected works than for terrorist or child abuse content.
[4] What might “intervene” mean? “Full knowledge of” what “consequences” and for whom? None of the 115 words of this sentence mentions copyright infringements or “copyright-infringing work”.
[5] What does “organising these works” mean? The explanatory recital says that making information “easily findable” would be covered. Bearing in mind we are talking about the internet, what might “easily” or “findable” mean for any of the 27 or 28 judicial systems that will have to interpret this?

To understand how the Estonian “compromise” on Article 13 and the associated recitals (i.e. legally-binding explanatory notes) attacks fundamental rights and European businesses, we divide it into three issues: service provider liability (1), upload filtering (2) and possible redress (3).

1) Service provider liability & responsibility

In essence, building on the strategy of the European Commission, the Estonian text undermines the legal certainty of internet companies, practically forcing them to filter, block and delete any content that might create a risk.

According to the Estonian text, a company provides a “communication to the public” if it is hosting copyrighted works (i.e. virtually everything that can be uploaded to the internet) and if it intervenes in the content, for example, by presenting it in a certain manner, categorising it or making it “findable”.

The “compromise” would leave all legislation on service provider (non-)liability in force, but reinterprets it in a way that makes it virtually impossible to apply it in practice, as a whole industry would be subjected to contradictory primary (Copyright Directive) and secondary (E-Commerce Directive) responsibility regimes. For instance, the compromise invents the notion of hosting providers that build services based on the copyright status of the content that is uploaded (“whose main or one of the main purposes is to provide access to copyright protected content uploaded by their users are engaging into acts of communication to the public and making available to the public”). This can only mean that the proposal covers either a very small minority of hosting providers (those designed around the copyright status of the uploaded content) or almost all companies that provide a hosting service online (those that allow copyrightable content to be uploaded).

Furthermore, the Estonians add another criterion to establish if a service is “communicating to the public”, namely, if it intervenes in the full knowledge of unspecified “consequences” for unspecified third parties. As an example, this would cover companies that either index content, present it “in a certain manner”, categorise content or make it findable.

Fundamental rights? Forget them!

Having created huge incentives to block, delete and filter content, the text says that the defensive measures to be implemented by service providers should respect the fundamental rights to freedom of expression and information. The only minor problem is that this obligation is purely hypothetical – companies have the right to manage their platforms as they wish. The EU Charter and national constitutions are binding on states, not on companies. As if worried that this might give citizens too many rights, the proposal makes clear that it only covers companies described in Article 13.1a, namely those that provide “access to the public to a significant amount of works or other protected subject-matter uploaded by their users who do not hold the relevant rights in the content uploaded”. Companies covered by the all-encompassing Article 13.1 (quoted above) are explicitly not covered by this illusory safeguard.

In an effort to create some form of balance, at least, licensing arrangements paid for by the service providers would cover uploads by users (assuming that it was possible for European companies to survive after investing in multiple filters and paying multiple licence fees to multiple rightsholder groups). However, this part of the text would not (for no obvious reason) cover uploads done by individuals “in their professional capacity”. So, uploading a copyrighted picture on Sunday evening as a private individual might be okay, but uploading the same picture on a professional blog would not be okay – even though the hosting provider would have paid for it to be communicated to the public. How would the hosting provider know the difference between two identical uploads? The text does not say what would/could/should happen in cases like this.

In addition, in an effort to seem balanced, the Estonian text suggests that services on which content is “mainly” uploaded by the rightsholders themselves would not be covered by these obligations. The only small problem is that the service provider has no way of knowing whether this is the case, or what “mainly” might mean – even with every corporate surveillance mechanism currently available – or whether a service covered by this description yesterday is still covered today.

2) Upload filtering

Having destroyed legal certainty for even the smallest provider in Article 13.1, the Estonian proposal then moves on, in Article 13.1b, to demand that service providers filter uploads. Companies would need to put in place a censorship machine if they give access to a “significant amount of copyright protected content”, even if they fall outside the all-encompassing definitions described in Article 13.1, because (apparently regardless of the content being uploaded) such services “thereby compete in the online content services market”. It would come as a surprise to many that there is a single online content services market, rather than numerous audio, image, graphic, audiovisual and text-based markets. However, to be fair to the Estonians, they have, in recital 38f, publicly admitted for the first time that multiple different filters would need to be acquired, with multiple and changing levels of effectiveness and, therefore, different levels of proportionality.

The Estonian Presidency proposes that service providers outside the EU, even if they are covered by liability protections, should invest in upload filtering. Services like GitHub, Imgur and 9GAG would have three options: a) laugh energetically, b) pay for multiple upload filters because the EU said so, despite the EU having no way of enforcing this outside its jurisdiction, or c) block access for people connecting from the EU. In keeping with the clarity of the rest of the text, it is not clear what is meant by EU citizens “using” such services – in particular, whether this covers browsing the services or just uploading to them.

The fact that this provision was included – presumably to avoid the legitimate criticism that this is an extraordinary act of self-harm against legitimate European businesses – says a lot. The fact that this totally (legally) unenforceable provision was included shows just how totally divorced from reality this debate has become. It is glaringly obvious that no foreign service provider would dream of respecting such an absurd foreign obligation, an obligation that is in breach of the case law of the Court of Justice of the European Union (CJEU).

One imagines that, in line with the Commission’s recent “Communication on Illegal Content Online”, the 27 EU Member States would seek to impose the obligations by using “voluntary” attacks on those platforms by advertising networks, internet access providers and other online companies.

More worryingly, upload filtering will lead to huge amounts of legal content being restricted – which is a direct contravention of the Charter of Fundamental Rights of the EU. The Estonian hope appears to be that, as the restriction is imposed through coercion and not explicitly written in law, it is out of reach of the CJEU and Member States’ Constitutional courts.

3) Inadequate redress for incoherent law

Finally, the hosting services, fearful of liability and pushed into deleting legal content (including content eligible to benefit from copyright exceptions and limitations), will have to organise a redress mechanism. This could be good news, except that nothing prevents hosting companies from removing content on the basis of their terms of service rather than the law, in order to avoid this obligation. Hosting providers would implement this redress mechanism even though the decision on what to remove would be based on what they are told by rightsholders, not on their own assessment or an assessment of the law. Furthermore, they are supposed to implement this redress mechanism even though the Council legal service assumes that no personal information will be processed by the filter. If the legal service is correct, then the hosting company will have no record of the filtering and could not implement the redress mechanism.

Next steps

It is expected that the Estonian Presidency will make a major push to adopt something resembling this text as the official position of the Council of the European Union between now and the end of 2017.

This will be followed by a vote in the European Parliament in January, to adopt the Parliament’s negotiating position.

There will then follow some months of negotiation between the Parliament and the Council, before final votes in both institutions, probably in July or September 2018.

Proposal for a Directive of the European Parliament and of the Council on copyright in the Digital Single Market (30.10.2017)

Leak: Three EU countries join forces for restrictions & copyright chaos (26.10.2017)

Estonia loves digital – why is it supporting the #censorshipmachine? (07.09.2017)

Leaked document: EU Presidency calls for massive internet filtering (06.09.2017)

Deconstructing the Article 13 of the Copyright proposal of the European Commission, Revision 2


31 Oct 2017

The privacy movement and dissent: Protest

By Guest author

This is the fourth blogpost of a series, originally published by EDRi member Bits of Freedom, that explains how the activists of a Berlin-based privacy movement operate, organise, and express dissent. The series is inspired by a thesis by Loes Derks van de Ven, which describes the privacy movement as she encountered it from 2013 to 2015.*

In order to describe, analyse, and understand the ways in which the privacy movement uses protest, it is important to bear in mind that the internet plays an all-encompassing role. First, we can distinguish between actions that are internet-supported and actions that are internet-based. Protests that are internet-supported are traditional means of protest that the internet has made easier to coordinate and organise, whereas protests that are internet-based could not have happened without the internet. Second, there is the height of the threshold for people to become involved. A high threshold means that participating entails a high risk and level of commitment, while a low threshold means a low risk and level of commitment. In the privacy movement, internet-supported protest with a low threshold and internet-based protest with a high threshold are the most common forms of protest.


Internet-supported protest with a low threshold

The most common types of internet-supported protest with a low threshold that we find in the privacy movement are asking for donations and organising legal protest demonstrations.

The internet has given a boost to donations: whereas in the analogue age the costs of coordinating such actions would outweigh the benefits, in the digital age collecting money has become much easier and more accessible. The Courage Foundation, for instance, collects donations for the legal defence of whistleblowers such as Edward Snowden and Lauri Love. Many other European organisations similarly offer their members and supporters the opportunity to make donations. However, it is worth noting that specifically in the case of the privacy movement, the threshold for donating money is higher than usual, as whistleblowing is a politically sensitive subject and community members have a heightened awareness of the privacy concerns associated with online payments. It is not surprising that donating via the pseudonymous digital currency Bitcoin is an option many organisations offer.

When it comes to demonstrations, the internet has also been an enhancing factor, as it has made the spreading and exchanging of information about the goal and practical details of a demonstration much easier. This also proves to be the case for demonstrations organised by the privacy movement. A fitting example of how the internet can help information spread rapidly, and of the effect that has on protest, is the Netzpolitik demonstration held in Berlin on 1 August 2015. The announcement by Netzpolitik, a German organisation concerned with digital rights and culture, that two of its reporters and one of its sources had been charged with treason prompted thousands of people to gather in the streets of Berlin to protest for the freedom of the press.

Here, too, it is worth considering how low the threshold for demonstrating actually is for activists within the privacy movement. In the analogue age it was difficult for governments to get a clear picture of who exactly took part in a demonstration. Modern technology, however, has changed and continues to change the game. For instance, after participating in a protest, protesters in Ukraine received a text message from their government that stated, “Dear Subscriber, you have been registered as a participant in a mass disturbance”. Something similar happened in Michigan, USA, in 2010: after a labour protest, the local police asked for information about every cellphone that had been near the protest. Thus, the level of risk involved in these sorts of protest is definitely worth reconsidering, especially when reflecting on a movement with so much awareness of (digital) surveillance.

Internet-based protest with a high threshold

Internet-based actions with a high threshold include protest websites, alternative media, culture jamming, and hacktivism.

Protest websites are websites that “promote social causes and chiefly mobilise support”. The privacy movement is involved in a number of these sorts of websites, for example edwardsnowden.com and chelseamanning.org, which are dedicated to whistleblowers and explain how supporters can help them, and savetheinternet.com, which asks supporters to take action in protecting net neutrality.

Alternative media have proven to be a crucial part of how the privacy movement voices dissent and “bears witness”, as the internet has made it possible to circumvent mass media and has reduced the effort needed to spread information to a large audience. A well-known example of alternative media emerging from the privacy movement is The Intercept, an online news organisation co-founded by Glenn Greenwald, Laura Poitras, and Jeremy Scahill. The publication aims, according to its website, to “[produce] fearless, adversarial journalism” and focuses on stories that provide transparency about the behaviour of government and corporate institutions.

Culture jamming is a form of protest in which corporate identity and communications are appropriated for the protesters’ own goals, using tactics such as “billboard pirating, physical and virtual graffiti, website alteration, [and] spoof sites”. An example of a spoof site is the Twitter account @NSA_PR, or NSA Public Relations in full, a reaction to the official Twitter account of the public relations department of the US National Security Agency, which was launched at the end of 2013. The spoof account often responds to recent surveillance and security issues in a humorous way. For example, when WikiLeaks published documents about the NSA’s interception of French leaders, NSA Public Relations posted, “Parlez-vous Français?”.

Hacktivism is the last form of internet-based protest with a high threshold. It is defined as “confrontational activities like DoS attacks via automated email floods, website defacements, or the use of malicious software like viruses and worms”. These activities are not commonly used within the privacy movement. Instead, a “digitally correct” form of hacktivism is practised, which builds computer programs that help accomplish the movement’s political aims. Of the many programs that exist, two of the most well-known and widely used for this kind of protest are the Tor browser and Pretty Good Privacy (PGP). Both are designed to protect the user’s privacy. Whereas it is debatable whether direct-action hacktivism is legal, the use of the Tor browser and email encryption is, of course.

The digital age has undeniably affected the way in which social movements protest. Traditional forms of protest have become internet-supported, but there are also forms of protest that could not exist without the internet. This is even more the case for the privacy movement. For a movement so intertwined with the internet, it is difficult to even make the distinction between online and offline protest, and the movement comes up with its own specific alterations to already existing forms of protest.

The series was originally published by EDRi member Bits of Freedom at https://www.bof.nl/tag/meeting-the-privacy-movement/

Dissent in the privacy movement: whistleblowing, art and protest (12.07.2017)

The privacy movement and dissent: Whistleblowing (23.08.2017)

The privacy movement and dissent: Art (04.10.2017)

(Contribution by Loes Derks van de Ven; Adaptation by Maren Schmid, EDRi intern)

* This research was finalised in 2015 and does not take into account the changes within the movement that have occurred since then.

Della Porta, Donatella, and Mario Diani. Social movements. An Introduction. Malden: Blackwell Publishing, 2006.
Van Aelst, Peter, and Jeroen van Laer. “Internet and Social Movement Action Repertoires. Opportunities and Limitations.” Information, Communication & Society 13:8 (2010): 1146-1171.