27 Apr 2017

AVMS Directive: It isn’t censorship if the content is mostly legal, right?


AVMSD – What is it?

The Audiovisual Media Services Directive (AVMSD) was originally designed for satellite TV, where a) broadcasters are in full editorial control and b) content is actively transmitted to viewers. It was subsequently extended to “on-demand” services, where a) providers make an active choice about what is made available, but b) viewers choose what to watch. The plan is now to extend it to video-sharing and (some) social media platforms, where a) there is no editorial control and b) viewers choose what to watch. In other words, there is almost no similarity between the original purpose and what is now being done. In many ways, this is like regulating a Porsche using legislation designed for a donkey cart.

What about the E-Commerce Directive on service provider liability?

In both the Council of the European Union and the European Parliament, there has been a lot of discussion about whether the AVMSD undermines the E-Commerce Directive, adopted in 2000. That Directive protects freedom of expression by ensuring that internet companies are not unduly incentivised to delete content. It does so by limiting liability to situations where they fail to act diligently upon receipt of a notice of illegality of the content in question.

The Council and the Parliament want a wide variety of content to be regulated – anything that (based on the wisdom of the provider, in the first instance) might impact the physical, mental and moral development of minors. At the same time, video-sharing and (some) social media platforms are expected to restrict content that is an “incitement to violence or hatred” by reference, for example, to sex, racial or ethnic origin, disability, age, or sexual orientation.

The content that the providers will be required to regulate is not, or not necessarily, illegal. As a result, it is argued that this privatised regulation of freedom of expression does not breach the E-Commerce Directive, because the obligation is to regulate legal content. In short, restricting legal content is not a breach of rules that cover illegal content.

So, how will video-sharing platforms do all of this?

One of the options is for states to regulate freedom of expression by regulating the terms of service of the social media companies and video-sharing platforms. This will allow content to be deleted without ever referring to the law. This fits with other EU instruments, such as the Europol Regulation, which allows police authorities to coerce companies into deleting online content. The Europol Regulation creates the task of “making of referrals of internet content, by which such forms of crime are facilitated, promoted or committed, to the online service providers concerned for their voluntary consideration of the compatibility of the referred internet content with their own terms and conditions.” It does not, however, fit so well with the Charter of Fundamental Rights and the European Convention on Human Rights, both of which require restrictions on fundamental rights to be provided for by clear, predictable law.

Craziest proposal – European Parliament

The craziest part of the Parliament’s proposal is probably importing, ironically, from the Charter of Fundamental Rights of the European Union, the list of types of discrimination that the EU Member States are prohibited from imposing. These prohibited types of discrimination then become the list of types of “incitement to hatred” that social media companies should protect us from with their terms of service. So, video-sharing platforms would have to protect people from “incitement to hatred” as a result of “other opinions”. The list makes complete sense in the Charter of Fundamental Rights, and no sense at all in the Directive that regulates audiovisual media services.

Craziest proposal – Council of the European Union

Remarkably, the Council text proposes that video-sharing and social media platforms should regulate live-streamed video. The Council also proposes banning content that is already banned by the Terrorism Directive. The Council’s position before this week’s discussions was leaked by EDRi member Statewatch and is available here.

This is nuts! Are there no voices of sanity?

Yes, just not enough, so far. Seven European Union Member States have expressed serious concerns regarding the proposals to further extend the scope of the AVMSD. They did so in an unpublished joint “non-paper” sent to the EU Council Presidency. The UK has made its reservations known separately. Those seven Member States (Czech Republic, Denmark, Finland, Ireland, Luxembourg, the Netherlands, and Sweden) pointed out the obvious problems of requiring video-sharing platforms to “police” non-illegal content over which they do not have editorial control.

The “non-paper” diplomatically but meaningfully points to the absurdity of the proposal to expand the scope of the Directive to services which could not “reasonably be expected by an end-user to be regulated similarly to audiovisual media services”, such as animated GIFs.

Some of the smaller political groups in the Parliament have been working astonishingly hard to try to achieve even small improvements in the text. Ironically, while the AVMS Directive represents much of what is worst in EU policy-making, the huge efforts made by some politicians behind the scenes on this file represent some of the finest, selfless, thankless work from EU parliamentarians.

AVMS Directive – censorship by coercive comedy confusion

Audiovisual Media Services Directive – is it good enough to be a law?

Revision of the Audiovisual Media Services Directive (AVMSD), 2016 proposal

Europol Regulation

Council text of 24 April


25 Apr 2017

European Parliament Culture Committee takes strong position against upload filtering


Today, 25 April 2017, the European Parliament Committee on Culture and Education (CULT) voted on the draft Audiovisual Media Services Directive (AVMSD). In a surprise move, the Committee voted to prohibit filtering of uploads by video-sharing platforms. This position, adopted by a majority of 17 to 9, will be the position of the Parliament in its upcoming negotiations with the EU Council, which aim to finalise the text.

“A vote opposing upload filtering sends a strong signal, ahead of negotiations on the Copyright reform,” said Joe McNamee, Executive Director of European Digital Rights. The European Commission proposes mandatory upload filtering in its draft Copyright Directive. “Now that the CULT Committee has wisely taken a position against mandatory filtering, which is a dangerous tool in the fight against incitement to hatred and violence, it would be absurd if they supported upload filtering for copyright reasons,” McNamee continued.

The Committee clearly tried, in months of compromise negotiations, to find common ground between expanding the policing obligations of video-sharing and social media platforms and protecting citizens’ fundamental rights. Unfortunately, the agreed text is far from perfect, so EDRi will keep working with the EU institutions in the next stages of the process in order to maximise protection for fundamental rights.

As amended by the CULT Committee, the AVMSD proposes that internet video-sharing platforms should take measures to protect children from (legal) content that could “impair their physical, mental or moral development”. This is extremely broad and dangerous. Internet video-sharing platforms would also be required to protect the general public from “incitement undermining human dignity”, incitement to terrorism, violence and hatred defined by reference to, among other traits and features, “political or any other opinion”. Requiring companies to, for example, restrict how we express ourselves online to protect society from “incitement to hatred” on the basis of “any other opinion” falls below minimum standards of legal predictability required by the EU Charter of Fundamental Rights.

It might sound like a good idea to protect people from bad things. However, nobody actually knows what the video-sharing platforms are meant to be protecting us from, whether such measures would be counterproductive or not, or the scale of the problems that they are supposed to be fixing. It is unclear why the AVMSD included these measures, as there are other legal instruments that deal with the same issues, such as the Child Exploitation Directive, the Terrorism Directive, and the Europol Regulation. It is also unclear whether the proposed measures would actually protect anyone. What is known is that such measures pose a threat to our freedom of expression, by encouraging video-sharing platforms and social media companies to delete perfectly legal material.

Read more:

AVMS Directive – censorship by coercive comedy confusion (19.04.2017)

German Social Media law – sharp criticism from leading legal expert (19.04.2017)

EDRi position paper on the proposed revision of the Audio-Visual Media Services Directive

EDRi’s proposals for amendments to the AVMSD (13.07.2016)


19 Apr 2017

AVMS Directive – censorship by coercive comedy confusion

By Joe McNamee

On 25 April 2017, the European Parliament Committee on Culture and Education (CULT) will vote on its report on the European Commission’s proposal to revise the Audiovisual Media Services Directive (AVMSD).

To understand just how confused the proposal is, it is worth understanding its history. In 1989, the EU adopted the “Television without Frontiers” Directive, to regulate cross-border satellite TV, covering issues such as jurisdiction and protection of minors. This Directive was out of date very quickly, leading to a revision that was adopted in 1997. That, in turn, was quickly out of date and revised in 2007. Then, in 2010, the EU adopted its fourth revision, this time trying to fit video on demand (VOD) services, such as Netflix, HBO Go, Amazon Video and others, into this legislation. In 2016, the European Commission proposed yet another revision, this time trying to squeeze yet another type of service – video-sharing platforms – into regulation designed in the mid-eighties for satellite TV.

----------------------------------------------------------------- Support our work with a one-off-donation! https://edri.org/donate/ -----------------------------------------------------------------

The current proposal, which proposes even more obligations on video-sharing platforms, is horribly contradictory and unclear. It does contain, however, a reasonable amount of comedy, which is an innovation for the EU institutions. For example, this legislation on “audiovisual” content covers, on the basis of Parliament compromise amendments, “a set of moving images”, which would cover, for example, an animated GIF.

Furthermore, it doesn’t cover all online video-sharing. For example, it does not cover video sections of news sites that are “indissociably complementary” to the site (borrowing wording from the Court of Justice of the European Union (CJEU) ruling in the New Media Online case). This means that video content featured on a news website should only be regulated under the Directive if it is not complementary to the journalistic activity of the publisher and is independent of written press articles on the site.

In a further (failed) effort to add to legal certainty, the Parliament’s draft compromise text also seeks to clarify the notion of “user-generated content” by removing from the Commission’s proposal the notion that it has to be user-generated. If the compromise text is adopted, the new definition of “user-generated” video would be “a set of moving images with or without sound constituting an individual item that is uploaded to a video-sharing platform”. This means that to be a “user-generated video”, it would not need to be user-generated nor, indeed, would it need to be a video.

On a more serious note, the proposal requires badly defined video-sharing platforms to take measures to protect children from content that would harm their “physical, mental or moral development” (“moral” added by the Parliament to various new parts of the Directive). This involves measures to restrict (undefined) legal content. The European Commission also proposed that the companies should enforce the law on incitement to racism and xenophobia. The Parliament’s suggestion is to extend law enforcement to areas where there is no law – such as incitement to hatred of “a person or group of persons defined by reference to nationality, sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion”. The Parliament also proposes setting up dispute resolution systems to verify decisions about which videos should stay online after accusations that they might lead to hatred of a person due to, for example, “any other opinion”. Video-sharing platforms will also need to make sure that video uploaders “declare” whether or not their videos contain advertisements, product placement or sponsored content.

It is clear that the broad restrictions of legal and illegal content that video-sharing platforms are meant to impose will lead to significant levels of removal of legal content, particularly due to the spectacularly unclear scope of their obligations. Restrictions on freedom of communication must, under the Charter of Fundamental Rights of the European Union, be “provided for by law”, necessary, and genuinely meet objectives of general interest. The Commission’s text failed to achieve this minimum standard, while the draft compromise amendments to be voted on 25 April by the Parliament fall very far short of it. The only possible result of the legal chaos that this will create for video-sharing platforms is the deletion of a large amount of legal content, in order to minimise their exposure to possible state sanctions or other litigation.


Television broadcasting activities: “Television without Frontiers” (TVWF) Directive – Summaries of EU legislation

Audiovisual Media Services Directive (2010/13/EU)

Revision of the Audiovisual Media Services Directive (AVMSD), 2016 proposal

(Contribution by Joe McNamee, EDRi)



19 Apr 2017

Dangerous myths peddled about data subject access rights

By Guest author

Now that the date on which the General Data Protection Regulation (GDPR) becomes enforceable is rapidly approaching, the European Data Protection Authorities (DPAs) are in the process of clarifying what their shared positions will be on various topics, including profiling. This is done through stakeholder consultation meetings.


During the latest meeting, one of the more contentious issues surrounding profiling turned out to be the transparency requirements regarding the algorithms used for automated decision making and profiling. While industry representatives in general provided constructive input on the various topics, this issue was more challenging. Several industry representatives were pushing for a very narrow interpretation of the right to access regarding the logic in automated decision making.

The basic argument is that industry has a right to hide the precise details of the calculations used to make decisions that discriminate against individuals. Three points were made in support of claims that the right of information regarding the logic of processing should not extend to disclosing the actual algorithms used:

  1. they would be protected trade secrets;
  2. intellectual property rights would preclude such disclosure;
  3. it would create a moral hazard in applications of profiling for fraud prevention.

Regarding the protection of trade secrets, the situation is fairly simple. The Trade Secrets Directive (2016/943/EU), for all its flaws, specifically mentions in its recitals that it shall not affect, among other rights, the right of access for data subjects. Since this Directive has to be implemented by June 2018, there is only a window of a few weeks (between the GDPR becoming enforceable in May 2018 and that implementation deadline) in which trade secrets protections in some Member States could, theoretically, prejudice data subject access to the logic used in automated decision making. So for all practical intents and purposes, trade secret legislation cannot be invoked to prevent disclosure of such underlying algorithms.

As far as intellectual property rights are involved, this is even more of a non-issue. The only so-called intellectual property rights that bear relevance here are copyright law and patent law.

Software copyright does not cover underlying algorithms, a view reiterated in SAS Institute Inc. v World Programming Ltd (C‑406/10), in which the Court of Justice of the European Union (CJEU) ruled that the functionality of a computer program is not protected by copyright under the Computer Programs Directive (91/250/EEC).

As far as patent law is involved, the European Patent Convention states that “schemes, rules and methods for performing mental acts, playing games or doing business, and programs for computers” shall not be regarded as patentable inventions (Article 52(2)(c)). It would be difficult to argue that the logic for automated decision making in profiling of personal data is not a method for doing business. Moreover, a requirement for patent protection is disclosure of the underlying technology, which makes it even harder to argue that patent law could preclude disclosure of the logic in automated decision making. Given that none of the other intellectual property rights even come close to covering the logic of algorithms, it follows that there are no barriers in intellectual property law to disclosure of the logic for automated decision making.

Even if there were intellectual property rights covering the underlying logic of software algorithms, it would still not necessarily be a given that these should override the data protection legislation. The CJEU has repeatedly considered competition law interests outweighing intellectual property interests in cases where it had to balance competition against intellectual property.

The last argument, that of a moral hazard, may or may not come into play in the context of fraud detection and insurance risk assessment. First, the European legislator has never made any such exception in the GDPR. Second, the concern can be addressed by disclosing the logic as applied to a specific data subject, instead of the general logic as applied to all data subjects affected.
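To illustrate that second point, the sketch below (a hypothetical linear risk score with invented feature names and weights, not any real scoring system) shows how the logic as applied to one data subject can be disclosed without publishing the full model:

```python
# Hypothetical sketch: for a simple linear scoring model, "the logic as
# applied to a specific data subject" can be disclosed as that subject's
# per-feature contributions, without revealing the full weight table.
# All feature names and weights below are invented for illustration.

WEIGHTS = {
    "claims_last_year": 1.5,    # each claim raises the risk score
    "account_age_years": -0.4,  # a longer history lowers it
    "late_payments": 2.0,       # late payments raise it sharply
}

def score(subject):
    """Overall risk score for one subject (higher = riskier)."""
    return sum(WEIGHTS[f] * v for f, v in subject.items())

def disclose_logic_for(subject):
    """Per-subject disclosure: how each feature contributed to *this* decision."""
    return {f: WEIGHTS[f] * v for f, v in subject.items()}

subject = {"claims_last_year": 2, "account_age_years": 5, "late_payments": 1}
print(score(subject))               # 1.5*2 - 0.4*5 + 2.0*1 = 3.0
print(disclose_logic_for(subject))  # contributions for this subject only
```

The subject learns exactly why their score came out as it did, while the general logic applied to everyone else stays undisclosed.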

The logical conclusion for DPAs enforcing the GDPR in the future is to interpret the aforementioned arguments from parts of industry with a great deal of cynicism. They simply have no basis in EU law or in reality.

Rejections of data subject access requests to the underlying logic of automated decision making based on “trade secrets” or “intellectual property rights” should be treated by DPAs as violations of the GDPR and addressed accordingly.


The Trade Secrets Directive (2016/943/EU)

Ruling of the SAS Institute Inc. v World Programming Ltd case

European Patent Convention

Insurance: How a simple query could cost you a premium penalty (30.09.2013)

(Contribution by Walter van Holst, EDRi member Vrijschrift, the Netherlands)



19 Apr 2017

Data mining for profit and election results – how predictable are we?

By Guest author

Did Donald Trump become president because he hired the data mining firm Cambridge Analytica, which uses profiling and micro-targeting in political elections? Some say yes, many say no. But what we know is that we are subjected to extensive personalised commercial and political messaging on the basis of data, including metadata, collected and used without our awareness and consent. It can result in changes in our behaviour, at least to some extent.


As much as we would like to think we are able to make decisions that are impossible to predict, we are creatures of habit, with our routines and patterns, submerged in the filter bubbles of the like-minded. In short, it is fairly easy to learn about our activities, preferences, habits and relationships just by getting a glimpse of our digital footprint, be it our browsing history, our social network, or our location data. This comes in very handy when marketing companies, data brokers or campaign strategists try to understand and predict our shopping preferences or our vote in the next elections. Our data is turned into profit and power.

Data mining refers to seeking useful insights from collected data. In other words, it’s a method to examine existing large data sets to generate new information. Profiling on the basis of data mining is problematic from at least two perspectives.

The first is that our public data are used to infer additional personal and intimate information about us – information we would not willingly disclose to the world, or of whose existence we are not even aware. It is, for example, easily possible to deduce a person’s sexual orientation, even if he or she does not disclose this information publicly.

The second point is that one can never get a full image of a person on the basis of these data, so the predictions can never be fully accurate – from a business perspective, they only need to be accurate enough to be profitable. This might seem rather harmless; you might only be slightly upset when you are targeted with an advertisement for a romantic getaway just after breaking up with your partner. But inaccurate profiling can easily turn into a nightmare. Even when relying heavily on algorithms, the processes of data mining are still subject to human subjectivity, with the potential for all sorts of biases reflecting human prejudice. Decisions based on prejudicial profiles can have real-life consequences in areas such as welfare, employment, credit and even education.
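To make concrete how little machinery such profiling needs, and how crude its predictions can be, here is a toy sketch (all browsing categories, histories and group labels are invented; real profiling uses far richer data, but the principle is the same):

```python
from collections import Counter

# Toy profiling sketch: guess a (hypothetical) interest group from a
# browsing history by counting which page categories co-occurred with
# each group in past data. Everything here is invented for illustration.

TRAINING = [
    (["news", "cars", "sports"], "group A"),
    (["cars", "diy", "sports"], "group A"),
    (["fashion", "news", "travel"], "group B"),
    (["travel", "fashion", "cooking"], "group B"),
]

def train(data):
    counts = {}  # group label -> Counter of categories seen for that group
    for history, label in data:
        counts.setdefault(label, Counter()).update(history)
    return counts

def predict(model, history):
    # Score each group by how often this user's categories appeared for it.
    # Never-seen categories and ties make the guess wrong, but "accurate
    # enough to be profitable" is the only bar a profiler needs to clear.
    scores = {label: sum(c[cat] for cat in history) for label, c in model.items()}
    return max(scores, key=scores.get)

model = train(TRAINING)
print(predict(model, ["sports", "cars"]))  # history resembling "group A"
```

A handful of counts already yields a confident-looking prediction; at no point does the model know anything true about the person, only which patterns paid off before.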

It may not be possible to single-handedly manipulate the outcome of an election through data mining. However, it raises serious concerns for privacy and democracy. Profiling on the basis of information about our interests, personalities, activities and affiliations can serve political and commercial marketing, which aims to change people’s behaviour by exploiting their vulnerable spots. This is why it is crucial to address profiling in legislation.

The General Data Protection Regulation (GDPR), the regulation intended to strengthen data protection within the EU, was adopted in April 2016 and applies from May 2018. It gives citizens more rights to information and to object, and contains more explicit requirements for consent than existing legislation. A proposal for an e-Privacy Regulation (ePR) to complement the GDPR was published in January 2017. It seeks to add clarity and legal certainty for individuals and businesses by providing specific rules related to our freedoms in the online environment.

Data protection is about privacy, security, autonomy and, ultimately about how our society functions.


Cambridge Analytica Explained: Data and Elections (13 April 2017)

Everything you need to know about the Data Protection Regulation

New e-Privacy rules need improvements to help build trust (9 March 2017)

(Contribution by Zarja Protner, EDRi intern)



19 Apr 2017

German Social Media law – sharp criticism from leading legal expert

By Joe McNamee

Professor Wolfgang Schulz, one of Europe’s preeminent legal experts, has prepared a short critique of Germany’s so-called “Act improving Law Enforcement on Social Networks”, also known under the abbreviation NetzDG.

Professor Schulz criticises the fact that the draft law covers a range of different types of offences, making it difficult to assess its necessity as a means of restricting freedom of speech. More damningly, he points to the key assumptions on which the law is based, arguing that they have been abandoned “for a long time”. Furthermore, he argues that there are “many effective ways of addressing fake news or hateful speech” that should have been – but implicitly were not – taken into account to minimise potential negative effects on freedom of speech.


EDRi’s suggested amendment to recital 31 of the Audio-Visual Media Services Directive, adopted by the European Parliament’s Civil Liberties Committee, raises concerns about the “balance of incentives” for internet companies. In line with the amendment, Professor Schulz points to the negative consequences of the German law for the “incentive structure” for social media companies. He argues that the draft law “strengthens this incentive structure further at the expense of freedom of speech”. In short, the incentives to remove information are increased, while incentives to leave information online have been reduced.

Professor Schulz also points out that the aim is not the criminal prosecution of offenders, but that the law “rather creates duties and provisions for administrative offences for platform providers”. The solitary provision that does focus on offenders is too broad and should be “restricted to particularly grave infringements upon rights only”. He also raises very clear arguments regarding the constitutionality of the proposal. His argument is that, as the focus is on regulation of content and not criminal prosecution, there is no specific federal power provided for in Germany’s “basic law” (Grundgesetz), which devolves content regulation to the regional governments (Länder).

The analysis of scope is particularly depressing. The term “provider” has already created problems in the German Telemedia Act, the term “user” is unclear regarding the applicability of the act, and business networks have been excluded in a way that risks “favouring domestic companies”. While freedom of speech would normally have to be considered when dealing with content of this nature, “the draft does not allow for that”.

The document also points to the weak approach to reporting in the draft. In particular, the obligation to report on take-down performance “creates even more incentives for the provider to perform a take-down on request without checking to avoid any self-blaming and -shaming in the report”.

With the lone exception of child abuse material (“child pornography”), removal of content requires a context-sensitive assessment. If a law is likely to have the effect of removing legal content, this is a restriction on freedom of speech, as laid down in the basic law, the European Convention on Human Rights and elsewhere. The provision requiring removal of “obviously illegal” content within 24 hours of the request creates an environment where it is safer for the provider to remove content quickly if there is any doubt. The seven-day deadline for non-obviously illegal content raises similar concerns. Finally, the rather unclear rules on re-uploads, which remove any possibility of assessing context, mean that automatic takedowns will result.

Professor Schulz points to references in the draft law to the E-Commerce Directive to show that the Charter of Fundamental Rights of the European Union is applicable. If the German law refers to an EU Directive, it needs to comply with the Charter. Professor Schulz also points out that the proposed measures appear to be in breach of Article 3.4 of the E-Commerce Directive.

Finally, the document details the unpredictable nature of the fines that could be imposed. The fines would be imposed by the Federal Office of Justice, which has a broad margin of appreciation as to whether a fine should be imposed or not. If the Office fails to act, even for politically motivated reasons, there is limited court oversight. And even when courts are involved, the person whose content is concerned is not a party to the proceedings.

Comments on the Draft for an Act improving Law Enforcement on Social Networks (NetzDG)

Reckless social media law threatens freedom of expression in Germany (05.04.2017)

LIBE Opinion on amendments to Audio-Visual Media Services Directive

(Contribution by Joe McNamee, EDRi)



19 Apr 2017

Challenges for “Legal Frameworks for Hacking by Law Enforcement”

By Guest author

A study entitled “Legal Frameworks for Hacking by Law Enforcement: Identification, Evaluation and Comparison of Practices” was published by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs at the request of the Committee on Civil Liberties, Justice and Home Affairs (LIBE). It presents policy proposals on the use of hacking techniques by law enforcement authorities. Based on the maturity of the legal framework, public debate and practices, the proposals rely on a thorough comparative examination of the legal frameworks for hacking by Law Enforcement Agencies (LEA) across six EU Member States (France, Germany, Italy, the Netherlands, Poland and the UK) and three non-EU countries (Australia, Israel and the US). Even though the primary rationale behind the study is the international and EU-level debate on ‘going dark’ (i.e. the decreasing ability of LEA to access and examine evidence due to encryption), it builds its proposals on the failures of alternatives such as backdoors and zero-day exploits.

The study examines the legal and practical balances and safeguards implemented at national level to ensure the legality, necessity and proportionality of restrictions to the fundamental right to privacy, the security of the internet, and to a lesser extent, the regulation of the sale of hacking tools. Based on these factors, the study highlights several key risk factors imposed by the use of hacking techniques by law enforcement:

  • Hacking techniques are extremely invasive, particularly when compared with traditional intrusive investigative tools (such as wiretapping and house searches), and, without appropriate policies in place, they pose a very high degree of risk to the fundamental rights to privacy and to freedom of expression and information.
  • Use of hacking techniques has the potential to significantly weaken the security of the internet by “increasing the attack surface for malicious abuse”, with possible damage far beyond the intended target.
  • Given the global nature of the internet, LEA (and service providers) may not know the physical location of the target data – this has given rise to the concept of “loss of knowledge of location”. In many such cases, the LEA may remotely access data located in the jurisdiction of another country, which poses serious risks to territorial sovereignty. Most of the time, LEA breach jurisdictional boundaries unknowingly, due to the confusing nature of the internet infrastructure and the lack of concrete procedures for mutual legal assistance in cross-border investigations.
  • In the recent past, many civil society organisations (including EDRi members) have questioned the current dual-use export control regimes.

The study further compares the legal frameworks and their context by evaluating the technical means of hacking and the fundamental rights considerations, in order to identify both the benefits and the risks of the use of hacking techniques by law enforcement. It finds that all the EU Member States examined supplement the common types of ex-ante and ex-post conditions with different, less common, conditions. Some of the key ex-ante conditions include:

  • judicial authorisation for law enforcement hacking;
  • restriction of the use of hacking tools to serious crimes, defined either by a list of offences for which hacking is permitted or by a minimum custodial sentence threshold, together with limits on the duration for which hacking may be used.

Some of the key ex-post conditions include:

  • provision for the notification of targets of hacking practices and remedy in cases of unlawful hacking; and
  • logging and reporting of hacking activities for review and oversight purposes.

The study highlights some of the criticisms of each country’s legal provisions for hacking: for example, the lack of knowledge amongst the judiciary in France, Germany, Italy and the Netherlands; the unclear definition of the devices that can be targeted in the Netherlands; and the inefficient process for screening and deleting non-relevant data in Germany. It also underlines some of the good aspects of the provisions, such as the 2017 Italian draft law’s efforts to protect against the overuse or abuse of a hacking tool’s extensive capabilities by separating the functionalities of the tools, and the Dutch Computer Crime III Bill’s requirement to conduct a formal proportionality assessment for each hacking request, with strict rules on the authorisation and expertise of the investigating officers that may perform hacking.

Based on the above analysis, the study derives twelve actionable policy proposals and recommendations. They call on the European Parliament to pass a resolution urging Member States to conduct a Privacy Impact Assessment whenever new laws are proposed to permit and govern the use of hacking techniques by LEA, and to require a clear and precise legal basis for such laws. The Parliament should support efforts to evaluate and monitor lawful hacking activities, support the development of appropriate responses to the handling of zero-day vulnerabilities, and reaffirm its commitment to strong encryption, taking into account both the fundamental rights of EU citizens and the security of the internet. Furthermore, pointing to the EU Agency for Fundamental Rights (FRA) research on fundamental rights protection in the context of surveillance, produced in response to the Snowden revelations, the proposals recommend that a similar brief be produced on the legal frameworks governing the use of hacking techniques by LEA across all EU Member States. Finally, the study proposes that FRA, CEPOL and Eurojust collaborate to provide training to all stakeholders who would potentially be involved in hacking activities.

----------------------------------------------------------------- Support our work - make a recurrent donation! https://edri.org/supporters/ -----------------------------------------------------------------

Legal Frameworks for Hacking by Law Enforcement: Identification, Evaluation and Comparison of Practices

Rights groups demand action on export controls (06.03.2017)

(Contribution by Siddharth Rao, Ford-Mozilla Open Web Fellow, EDRi)



19 Apr 2017

Member Spotlight: SHARE Foundation

By Guest author

This is the fifth article of the series “EDRi member in the Spotlight” in which our members have the opportunity to introduce themselves and their work in depth.

Today we introduce our Serbian member SHARE Foundation.


1. Who are you and what is your organisation’s goal and mission?

We are SHARE Foundation from Serbia, a non-profit organisation dedicated to protecting human rights and freedoms in the digital environment and promoting positive values of openness, decentralisation, free access to information, technology and knowledge.

2. How did it all begin, and how did your organisation develop its work?

After a series of huge SHARE conferences held in Belgrade and Beirut, each with more than 1000 participants, we decided that our activism and advocacy efforts needed to be established on an organisational level. In 2012, SHARE Foundation was born.

3. The biggest opportunity created by advancements in information and communication technology is…

Access to knowledge, free flow of information, and of course, cat memes!


4. The biggest threat created by advancements in information and communication technology is…

Mass surveillance of people’s lives and their monetisation by mega-corporations which use non-transparent algorithms.

5. Which are the biggest victories/successes/achievements of your organisation?

In late 2014, the Government of Serbia proposed amendments to the Law on Games of Chance, which would have introduced blocking and filtering of the internet in Serbia. After a joint effort by civil society, led by SHARE Foundation, to oppose these measures, the proposed provisions were dropped from the legislative procedure.

In recent years, SHARE Foundation has organised more than 20 events, published more than ten info-guides and numerous SHARE Lab investigative stories. We have also provided free legal and technical assistance to many online news portals and civil society media platforms.

In January 2017, SHARE Foundation received a certificate of gratitude.

6. If your organisation could now change one thing in your country, what would that be?

We would reduce the number of political trolls and bots on the internet – which is currently too damn high!

7. What is the biggest challenge your organisation is currently facing in your country?

The outdated Law on Protection of Personal Data. The process of reforming the law has been under way for almost a year and a half now, and we still have no information on when a new version of the draft law will be published.

8. How can one get in touch with you if they want to help as a volunteer, or donate to support your work?

You can reach us via Twitter and Facebook or visit our website shareconference.net, where you can find more info about our work. If you prefer email, you can contact us at info(at)sharedefense(dot)org.

SHARE Foundation

SHARE Lab investigative stories

SHARE Twitter

(Contribution by EDRi member SHARE Foundation, Serbia)



10 Apr 2017

RightsCon session on cross-border access to e-evidence – key interventions

By Joe McNamee

European Digital Rights organised a session at the RightsCon conference in Brussels on 31 March 2017, in order to build awareness among stakeholders about the multiple international developments on law enforcement access to electronic evidence.

The bulk of the discussions focussed on a possible new protocol to the Cybercrime (Budapest) Convention of the Council of Europe (CoE). The CoE initiative is far broader than the Council of Europe area, covering all 53 countries that have ratified the Cybercrime Convention (including the USA, Australia, Canada and others). It is not necessary for countries to have ratified the data protection or human rights Conventions of the CoE before ratifying the Cybercrime Convention. Some of the issues surrounding the instrument are assessed in an “issue paper” by Professor Douwe Korff on the “Rule of Law on the Internet and in the Wider Digital World” prepared for the Council of Europe Human Rights Commissioner (with input from EDRi).

To ensure accuracy and balance, speakers were given the opportunity to edit the draft summary of their own interventions. The text below is, therefore, not a perfect record of what was actually said.

The key speakers were Alexander Seger, head of the Cybercrime Division of the Council of Europe, Lani Cossette from Microsoft, Owen Bennett from the European Internet Services Providers Association (EuroISPA) and Javier Ruiz from EDRi member Open Rights Group (ORG).

Alexander Seger opened by arguing that cross-border is a rather fictional concept on the internet. He said that the Cybercrime Convention is a criminal justice treaty, so data access is about access to specific data in specific criminal investigations, and not about bulk data collection or national security measures. All measures fall under criminal law, which is where countries have the strongest safeguards. In addition, the starting point in any such discussions is that, as detailed in the European Convention on Human Rights (ECHR) [note: the Convention is open to countries that are not a party to the ECHR], there is a positive obligation on states to protect citizens from crime.

He pointed out that if there are a hundred cybercrimes reported, the cases that actually lead to a court judgement might amount to 0.1% or even less. This raises questions as to whether this positive obligation is being met. He mentioned that, in other meetings, he asked participants to think of three types of crime where evidence would not be, to some extent, on a computer system, but few examples could be found. In reality, there is almost always some evidence on a computer system. However, access to such evidence is extremely complicated.

There are areas the Cybercrime Committee of the Council of Europe has identified where action is possible, in particular regarding how to deal with evidence in the “cloud” (i.e. on servers in foreign, multiple, shifting or unknown jurisdictions). The Committee looked at the issues for two and a half years and produced five recommendations.

Four of the five recommendations have been followed, and work on the fifth will be subject to a decision in June 2017.

  1. Mutual Legal Assistance Treaty (MLAT) arrangements to be made more efficient. MLA remains the most important tool to obtain evidence from foreign jurisdictions, and this is not an effort to get around existing procedures. Examples include finding a “light” system for getting access to basic “subscriber information” or for dealing with emergency situations. The United States legal system has such options, but many national legal systems do not. For example, MLATs in South America often go through foreign ministries rather than through criminal justice systems, which is complicated and causes delays.
  2. Guidance Note on Article 18 (Production orders for subscriber information) to be produced. This is about subscriber information, not content or traffic data. This is the type of data that is needed most often: big US companies receive thousands of requests per year directly from law enforcement authorities (LEAs) abroad. Article 18.1.a covers production orders in a given jurisdiction. The Guidance Note says that it does not matter where the data is stored; the decisive question is who is in possession or control of it. This is the same as when banks repatriate data to deal with a fraud in a country. Article 18.1.b covers situations where a service provider based abroad is providing a service to users in another country. However, there is a lack of clarity regarding how to serve production orders on companies in such situations, and no enforcement mechanism if the provider fails to provide the requested data.
  3. Governments to fully implement Article 18. Procedures on this point need to be precise rather than the broad powers that are often used at the moment. Such rules need to be clearly defined in national law to meet rule of law requirements.
  4. Development of practical measures for cooperation with providers, and the provision of an online tool to facilitate procedures. This will allow providers to understand, and respect, the domestic law of the country making the request. It will also give requesting authorities a better understanding of companies’ procedures.
  5. Decision to be made on the possible drafting of an additional protocol on access to evidence in the cloud in June 2017. The draft plan is to take about 2.5 years to reach agreement on a draft protocol.

Current plans focus on four key topics:

  1. Additional possibilities for mutual legal assistance, including emergency and light procedures.
  2. Transborder access to data. Some countries may already get access to data, but such access often rests on a shaky legal framework. Access under the Budapest Convention is currently permitted only under very narrow conditions, as confirmed in the Guidance Note on Article 32. Are there additional options – if the persons are in the jurisdiction and the crime is in the jurisdiction, what protections are needed? Work on this issue was suspended, as moving forward in the aftermath of the Snowden revelations was not feasible. The aim now is to avoid a situation where states unilaterally develop their own solutions, thereby creating a jungle.
  3. Direct cooperation with providers in other jurisdictions. Can we do more in that environment?
  4. Data protection and rule of law safeguards. The more innovation that is proposed, the more safeguards will be needed.

Data protection organisations, civil society and industry will be consulted in the process.

For context, six major US providers directly received 138 000 requests from Parties to the Budapest Convention other than the US.

Lani Cossette, representing Microsoft, stressed that the company complies with legal obligations and does not volunteer access to data; Microsoft prefers obligation to cooperation. She said that there is currently confusion as regards which law applies in which country.

The current legal framework was written when “the cloud”, as we know it, did not exist. Things were fairly simple in the 1980s: emails were stored on local servers. Later, data centres were built in the US. More recently, more and more data centres are being built in Europe, opening up new questions regarding whose law applies in various scenarios. Jurisdiction is traditionally rooted in territoriality, so jurisdiction over digital evidence has been challenging to sort out, because data does not always sit in one territory. Microsoft has data centres in Ireland, which serve users and customers in Europe, which means that there are legal conflicts even within Europe, not just between the US and Europe.

Microsoft has participated in the consultation of the Commission task force on e-evidence. The Commission was asked:

  • could procedures be improved;
  • could MLA procedures be improved to unburden the system; and
  • is legislation needed with regard to enforcement jurisdiction?

The Commission’s current work cycle on this issue started in July 2016, with stakeholder meetings. The consultation process includes civil society.

In June 2016, the Council of the EU (the institution representing EU Member State governments in the EU decision-making process) produced a document on “improving criminal justice in cyberspace”, which sets out the broad policy direction to be followed.

This led the European Commission to produce a report in December 2016, which details a problem definition and details different options for jurisdiction, requests for data, etc.

Commissioner Jourova (responsible for justice, consumers and gender equality) has indicated that she expects the Commission to present three or four options for moving forward with the file at the next Justice and Home Affairs (JHA) Council meeting on 8 June, so the timetable is very compact. However, we do not expect a full legislative proposal at that stage.

Owen Bennett from the European Internet Services Providers Association (EuroISPA) said that systems are designed around the needs and capabilities of bigger companies. However, more cross-border access issues are arising for small companies due, for example, to lower roaming prices and more cross-border services. There is a huge increase in cross-border requests, which can create significant burdens.

EuroISPA stresses three key principles:

  1. On a high level, it is important that smaller services should only ever be expected to cooperate with local law enforcement. It is important to have clear rules to build on existing good cooperation. There are increasing demands received in foreign languages from foreign jurisdictions, with little clarity on legal obligations. Sometimes there is a legal obligation not to respond.
  2. Direct access is very worrying. There are also issues regarding the financing of procedures and the financial burden of the legal assessment of requests for data.
  3. Mutual legal assistance arrangements should remain the core of any new framework in this policy area.

Javier Ruiz then gave a summary of Open Rights Group’s (ORG) views of the negotiations between the United Kingdom and the US on access to data.

ORG met senior staff from the UK government during spring 2016 to discuss the proposed UK-US treaty. The discussion was based on US documents as the UK has not to date produced any paper trail.

The first thing to clarify is that, despite MLAT being portrayed as the problem this initiative has to solve, the proposed treaty is not about MLAT, but about law enforcement accessing communications at an early stage of investigations, rather than piecing evidence together after a crime has been committed. MLAT would still need to be fixed.

Also, the proposed treaty would cover only interception of communications, which, in the UK at least, is not supposed to include requests for metadata. The UK police already ask US companies for metadata, and it is legally possible for companies to disclose this information under US law; such disclosures do not cover content.

On the basis of the available information, ORG raised concerns that the system proposed was extremely weak with regard to safeguards and to processes to smooth the interoperability of the UK and US jurisdictions. ORG saw it as throwing the two systems against each other and hoping for the best. There have been many complaints from US civil society about UK processes not being up to the same standard, which the UK government strenuously denies. Issues raised include independent authorisation, the inadmissibility of intercept evidence in court, and the lack of equivalence to US restrictions on live wiretapping.

ORG complained to UK government officials that the system appears to be designed from the point of view of UK access to US data, with little thought invested in the reverse process. The response was that in real life such requests would never happen. ORG also raised concerns about the lack of accountability mechanisms, which rely purely on existing reporting. Since then, the Home Office (the UK ministry of the interior) has stated that it will be strengthening the processes, with a single point of contact outside the country. This is not enough, as US companies would still have to deal directly with the British system, and vice versa. At the very least, some common processes would be needed, and the final administrative step of the warrant should be undertaken domestically, so that appeals and complaints could be handled in the same country.


06 Apr 2017

The European Parliament adopts another resolution critical of the Privacy Shield


On 6 April 2017, the European Parliament (EP) adopted a resolution on the adequacy of the protection afforded by the EU-US Privacy Shield. The scheme provides the United States with a unique arrangement for the transfer of personal data from the European Union to the United States. The Privacy Shield replaced the Safe Harbour decision, which used to serve the same purpose, until the Court of Justice of the European Union (CJEU) invalidated it in the Schrems case in 2015.

The EU-US Privacy Shield has been showered with criticism from the moment the details of the new(ish) rules were published. However, the European Commission (EC) proposed and adopted it anyway.

The Article 29 Data Protection Working Party of national data protection authorities and the European Data Protection Supervisor (EDPS) issued opinions expressing numerous concerns regarding the level of protection offered by the Privacy Shield and its compliance with the right to the protection of personal data and the right to privacy. Moreover, the EP adopted a similar resolution in May 2016, when the draft decision on the Privacy Shield was adopted, but its recommendations seem to have been ignored.

Today, the EP has adopted a new resolution which regards many of the Privacy Shield’s provisions as inadequate. The resolution lists several problems in the agreement and calls on the Commission to thoroughly examine them in its first annual review in September 2017.

Among the issues listed in the resolution, the EP highlights the lack of specific rules on automated decisions and of a general right to object, the need for stricter guarantees on the independence and powers of the Ombudsperson mechanism, the current non-quorate status of the Privacy and Civil Liberties Oversight Board, and the lack of concrete assurances that US agencies have established safeguards against mass and indiscriminate collection of personal data (bulk collection). Another flaw mentioned in the Parliament’s criticism is the fact that the Privacy Shield is based on voluntary self-certification and therefore applies only to US organisations which have voluntarily signed up to it, which means that many companies are not covered by the scheme.

Furthermore, the resolution asks the Commission to seek (long overdue) clarification on the legal status of the “written assurances” provided by the US, and to ensure that the commitments made under the new decision will be kept by the new US administration. The resolution also calls on the European data protection authorities (DPAs) to monitor the functioning of the Privacy Shield and to exercise their powers to suspend or ban data transfers “if they consider that the fundamental rights to privacy and the protection of personal data of the Union’s data subjects are not ensured”.

Unsurprisingly, the Parliament notes “with concern” the dismantling of the FCC’s privacy rules. Last but not least, the EP calls on the Commission to take all the measures necessary for the Privacy Shield to comply with the General Data Protection Regulation (GDPR) and with the Charter of Fundamental Rights of the European Union.

The Privacy Shield has already been brought before the CJEU by two advocacy groups: EDRi member Digital Rights Ireland (case number T-670/16) and EDRi observer La Quadrature du Net (case number T-738/16). If the CJEU applies the same reasoning as for the former Safe Harbour agreement, the Privacy Shield will need a replacement very soon. It is to be hoped that the EC is preparing a contingency plan to resolve this situation as soon as possible, rather than waiting (as it did with Safe Harbour and the two data retention rulings) until it is forced to act by the Court of Justice. If the Commission does this, then maybe, finally, fundamental rights can be protected on both sides of the Atlantic, and both citizens and businesses can enjoy the benefits of increased trust in the online environment.

Civil society letter: Without reforms in US surveillance laws, the Privacy Shield must be suspended (02.03.2017)

Privacy Shield: Privacy Sham (12.07.2016)

European Parliament confirms that “Privacy Shield” is inadequate (26.05.2016)