03 Oct 2017

Battle lines drawn between citizens and internet giants in EU e-Privacy Regulation

By EDRi

On 2 October, the European Parliament's Committee on Legal Affairs (JURI) and Committee on Industry, Research and Energy (ITRE) voted on the e-Privacy Regulation; the Committee on Internal Market and Consumer Protection (IMCO) had voted on 28 September. These votes will feed into the final decision to be taken by the Committee on Civil Liberties, Justice and Home Affairs (LIBE) next week.

The purpose of the e-Privacy Regulation is to protect the privacy and integrity of electronic communication. It covers both private and business communications.

The Parliament needs to stand firm and vote for measures to strengthen privacy and security online,

said Joe McNamee, Executive Director of European Digital Rights.

Caving in to lobbying from – or on behalf of – online data monopolies would undermine the rights of European citizens and businesses.

The Committee on Legal Affairs (JURI) has taken big steps to protect European citizens’ rights to privacy. It adopted amendments that ensure communications (such as emails) need to be protected both in transit and at rest, which will prevent your email service provider from reading your emails both when they are being sent and when they are stored in the “cloud”. JURI also included a provision to protect privacy by design and by default, meaning that when you install software, its settings will automatically prevent tracking and collection of data.

The Committee on Industry, Research and Energy (ITRE) proposed a mix of positive and negative changes to the text. Among many issues, ITRE strongly defended effective encryption. On the downside, it supported the Commission proposal not to enforce security and privacy by design, and proposed allowing further processing of confidential communications data without users’ “consent”. The first measure de facto opposes the concept of privacy by design and by default – making the device as privacy-friendly as possible without requiring any changes from the user – and supports the current industry practice of hiding privacy options behind misleading language that is almost impossible to understand. The second measure could become a loophole allowing private communications to be re-used for any purpose that a company considers “compatible” with the initial collection of the data.

By far the most hostile to the interests of citizens and to the security of business communications is the Committee on Internal Market and Consumer Protection (IMCO), which voted on the e-Privacy Regulation on 28 September. In line with an avalanche of broadly pro-Google lobbying, IMCO’s Opinion reduces the scope of the Regulation and its safeguards for citizens, which would considerably weaken the Commission’s proposal.

The e-Privacy rules have been under review since 2016, following the adoption of the new General Data Protection Regulation.

FAQ e-Privacy
https://edri.org/epd-faq/

e-Privacy revision: Document pool (10.01.2017)
https://edri.org/eprivacy-directive-document-pool/

Quick guide on the proposal of an e-Privacy Regulation (09.03.2017)
https://edri.org/files/epd-revision/ePR_EDRi_quickguide_20170309.pdf

28 Sep 2017

European Parliament Consumer Protection Committee chooses Google ahead of citizens – again

By Joe McNamee

On 28 September, the European Parliament Committee on Internal Market and Consumer Protection (IMCO) adopted its Opinion on the proposed e-Privacy Regulation. Just as it did when reviewing the General Data Protection Regulation (GDPR), it is fighting hard to minimise the privacy and security of the European citizens it is meant to defend.

Currently, the surveillance-based online advertising market is dominated by Facebook and Google. It was estimated that, in the US, 99% of growth in the sector is being absorbed by those two companies. Most of the amendments adopted in IMCO serve the purpose of defending this anti-competitive, anti-choice, anti-privacy, anti-innovation framework.

Some of the many egregious efforts to water down the proposal include:

  • The Opinion adds a loophole to the text to reduce the protection of communications. It suggests that the confidentiality of emails and other electronic communications should be protected only when they are “in transit”. This contradicts the entire logic of the legislation and, crucially, will allow companies such as Google to monitor the content of communications – when not “in transit”, but stored on their servers – to intensify their profiling of users.
  • It supports the European Commission’s position that devices and software should not prevent unauthorised access by default. Instead, there should simply be an option – possibly hidden somewhere in the settings, as is typical – to set security levels. Ironically, this position completely contradicts other EU legislation, which criminalises unauthorised trespassing on computer systems. It also contradicts the GDPR, which foresees data protection by design and by default.
  • The Opinion suggests that “further processing” of metadata – information about our location, the times we communicate, the people we communicate with and so on – should not require consent. The activity this permits is not vastly different from the definition of stalking in Dutch law: “systematically and deliberately intrudes upon someone’s personal environment with the intention to force the other to do something, not to do something or to tolerate something…” (translation adapted from Wikipedia)
  • Instead of requiring providers to anonymise or delete personal data that is no longer needed, the IMCO Committee would allow them to keep unnecessary personal data, as long as it is pseudonymised. This means that the data collected and used by the provider could, at any moment, be re-identified, creating unnecessary security, data protection and privacy risks for the individuals concerned.
  • Rather cleverly, the Committee has exploited child protection to make profiling of users easier for companies. By saying that electronic communications data “shall not be used for profiling or behaviourally targeted advertising purposes” for children, it implies that this is acceptable for adults.
  • An obligation to inform users about security risks that have been discovered, and about how to minimise them, has simply been deleted from the Opinion.
  • The Committee did not amend the Commission’s proposal to allow people to be tracked through their mobile devices as they move around physically (in towns, malls, airports, and so on). Individuals are expected to opt out individually each time they enter an area with one or more tracking networks – on the condition that they see and react to the sign or signs indicating that tracking is happening.

The extremist anti-consumer position of the Committee on Internal Market and Consumer Protection was rightfully ignored in the adoption of the General Data Protection Regulation. We can only hope that, in addition to the Committee being given a name that better reflects its priorities, its Opinion will also be ignored this time.

IMCO Compromise Amendments to the proposal for an e-Privacy Regulation
http://edri.org/files/eprivacy/eprivacy_imco_ca_20170928.pdf

e-Privacy revision: Document pool
https://edri.org/eprivacy-directive-document-pool/

Are we on the right track for a strong e-Privacy Regulation? (28.07.2017)
https://edri.org/are-we-on-the-right-track-for-a-strong-e-privacy-regulation/

(Contribution by Joe McNamee, EDRi)

28 Sep 2017

Commission’s position on tackling illegal content online is contradictory and dangerous for free speech

By Joe McNamee

Today, on 28 September, the European Commission published its long-awaited Communication “Tackling Illegal Content Online”. This follows a leaked copy we previously analysed.

The document puts virtually all its focus on internet companies monitoring online communications, in order to remove content that they decide might be illegal. It presents few safeguards for free speech, and little concern for dealing with content that is actually criminal.

Key points to note from the document are:

  1. The Commission hastily removed references to domain name services having “a role to play in defining solutions” for illegal content, following the chaotic efforts last week by the Spanish government to shut down referendum websites in Catalonia using the domain name registry. This is a very good example of how the simplistic “solutions” proposed by the Commission run into trouble as soon as they are faced with real world situations.
  2. The Communication embarrassingly contradicts the Commission’s hugely controversial proposed Copyright Directive. Under the proposed Directive, all hosting services that optimise content in any way whatsoever are understood to be “active” and therefore presumed to be aware of illegal activities. Under the Communication on tackling illegal content online, internet providers are understood not to have knowledge of illegal content, even if they are actively searching for it.
  3. The Communication explicitly abandons Europe’s role in defending free speech internationally. In the final set of edits, it completely deleted a reference for the need to be consistent with, for example, the EU Human Rights Guidelines on Freedom of Expression Online and Offline.
  4. The Commission makes no effort at all to reflect on whether the content being deleted is actually illegal, nor on whether the impact is counterproductive. The speed and proportion of removals are praised simply on the basis of the number of takedowns. However, in response to a Parliamentary Question about the takedowns that follow reports of serious crime sent from Europol to service providers, the European Commission said: “The EU IRU [Internet Referral Unit] does not keep any statistics of how many of the referrals to Member States led to the opening of an investigation.” The Commission points out that “removal of illegal content should not affect investigations”, but appears entirely indifferent as to whether it does or not. This is reinforced by the explanation that providers should cooperate with law enforcement authorities “where appropriate”.
  5. The European Commission puts huge faith in the use of “trusted flaggers” – organisations that are “trusted” to submit valid complaints about illegal content. These can then be assumed to be correct and the content can be removed more quickly. According to the Commission, “trusted flaggers can be expected to bring their expertise and work with high quality standards”. Indeed, the Commission is so trusting of “trusted flaggers” that it says that “balance needs to be struck between ensuring a high quality of notices coming from trusted flaggers” and “the burden in ensuring these quality standards”. For context, 99.95% of complaints made to Google under the Trusted Copyright Removal Programme referred to sites that were not even in Google’s index, let alone illegal.

The Commission’s approach of fully privatising freedom of expression online, its almost complete indifference to the impact of this privatisation on fundamental rights, and its apparent indifference to the achievement of the underlying public policy aims were described clearly by Nietzsche in “Human, All Too Human” (1878):

“Finally – one can say this with certainty – distrust of all government, insight into the uselessness and destructiveness of these short-winded struggles will impel men to a quite novel resolve: the resolve to do away with the concept of the state, to the abolition of the distinction between private and public. Private companies will step by step absorb the business of the state: even the most resistant remainder of what was formerly the work of government (for example its activities designed to protect the private person from the private person) will in the long run be taken care of by private contractors.”

…and Homer Simpson:

Homer: Oh, Moe, how do I make ’em like me?
Moe: Uh, gee, you’re kind of all over the place there, Homer. You need to
focus here. You got to think hard and come up with a slogan that appeals
to all the lazy slobs out there.
Homer: [Groans] Can’t someone else do it? “Can’t someone else do it?”
Moe: That’s perfect.
Homer: It is?
Moe: Yeah. Now get out there and spread that message to the people.
Homer: Whoo-hoo!
Moe: Whoa, hey. You didn’t pay for the beer.
Homer: Can’t someone else do it?

European Commission’s Communication “Tackling Illegal Content Online” (28.09.2017)
https://ec.europa.eu/transparency/regdoc/rep/1/2017/EN/COM-2017-555-F1-EN-MAIN-PART-1.PDF

Leaked document: Does the EU Commission really plan to tackle illegal content online? (21.09.2017)
https://edri.org/leaked-document-does-the-eu-commission-actually-aim-to-tackle-illegal-content-online/

(Contribution by Joe McNamee, EDRi)

26 Sep 2017

Letter to the FCC: The world is for net neutrality

By Maryant Fernández Pérez

Today, 26 September 2017, European Digital Rights (EDRi) and over 200 other civil society organisations and businesses joined forces to send a letter to the head of the US Federal Communications Commission (FCC) with a clear message: the world is for net neutrality. In the letter, we express concerns about the negative impact the rollback of US Title II net neutrality rules can have on the world’s shared internet ecosystem.

This letter defends the internet as a global, open and non-discriminatory network. Saving net neutrality again will benefit creativity, innovation, competition, and the economy. It will also foster our rights to free speech and access to knowledge.

You can read the letter here and below:

Dear FCC Chairman Ajit Pai,

We are companies and organisations headquartered outside the United States of America, and we are concerned about how the rollback of US Title II net neutrality rules could negatively impact the world’s shared Internet ecosystem.

The Internet has been such a social and economic success because permissionless any-to-any communication is at its core. Net neutrality allows any online business or societal movement equal access to a global audience – undermining this principle would create significant social and economic harms.

Access to the entire Internet is not only vital to American business and society, it is essential to businesses and people outside the United States. We also depend on a strong competitive framework and legal foundation to ensure that Internet service providers (ISPs) cannot create barriers to commerce and free speech by discriminating against websites, services, and apps, or by imposing new fees that harm businesses and consumers.

The open Internet makes it possible for all of us to bring our best business ideas to the world without interference or seeking permission from any gatekeeper first. This is possible because the principle of net neutrality ensures that everyone, no matter where they are located, has unimpeded access to Internet opportunities.

The FCC’s longstanding commitment to protect the open Internet is a central reason why the Internet remains an engine of entrepreneurship and economic growth both in the US and outside its borders. We are deeply concerned that the proposed regulatory changes to net neutrality will undermine free speech and competition on the Internet. Despite assurances to the contrary, the changes proposed by the FCC would remove the only existing legal foundation strong enough to ensure the United States will continue to honor the principle of net neutrality.

An FCC rollback of net neutrality provisions would grant US Internet service providers like AT&T, Comcast and Verizon new powers to control the Internet. Ultimately, these changes will allow US Internet access providers to demand payment from online services for the right to have privileged access to that provider’s customer base, creating a patchwork of new monopolies to replace the existing open market. This will fragment the market, destroy economies of scale, reduce incentives for innovation, undermine social movements and rip the soul out of the Internet.

We urge you to maintain strong net neutrality rules and focus on policies that encourage the deployment of new network infrastructure, and create greater choice and competition amongst Internet service providers.

Thank you for considering our views.

CC: Members of Congress

21 Sep 2017

Leaked document: Does the EU Commission really plan to tackle illegal content online?

By Joe McNamee

On 14 September, Politico published a leaked draft of the European Commission’s Communication “Tackling Illegal Content Online”. The Communication contains “guidelines” to tackle illegal content, while remaining coy in key areas. It is expected to be officially published on 28 September.

Introduction

The European Commission’s approach builds on the increasingly loud political voices demanding that somebody do something about bad things online, while carefully avoiding any responsibility for governments to do anything concrete. Indeed, not only does the introduction barely mention the role of states in fighting criminal activity online, it describes (big) platforms as having a “central role” in society. States can investigate, prosecute and punish crimes, online and offline; internet companies cannot – and should not.

Investigation and prosecution

In line with existing practice, the focus is on headline-grabbing actions by internet companies as THE solution to various forms of criminality online.

Under the Commission-initiated industry “code of conduct” on hate speech, the law is downgraded behind companies’ terms of service. The Communication continues in the same vein.

The scale of the lack of interest in investigating and prosecuting the individuals behind uploads of terrorist material – or even in whether this material is illegal at all – is demonstrated by a recent response to a parliamentary question, in which the Commission confirmed that the EU Internet Referral Unit “does not keep any statistics of how many of the referrals to Member States led to the opening of an investigation.” Instead, the draft Communication lists some statistics about the speed of removal of possibly illegal content, without a real review mechanism, measures to identify and rectify counterproductive effects on crime-fighting, recommendations on counter-notice systems, or any attempt to support real, accountable transparency.

Duty of care – intermediaries

The draft Communication asserts, with no references or explanation, that platforms have a “duty of care”. It is difficult to work out if the Commission is seeking to assert that a legal “duty of care” exists. Such duties are mentioned in recital 48 of the E-Commerce Directive. However, correspondence (pdf) between the Commission and former Member of the European Parliament (MEP) Charlotte Cederschiöld (EPP) at the time of adoption of the Directive proves conclusively that no such “duties” exist in EU law, beyond the obligations in the articles of the E-Commerce Directive.

Duty of care – national authorities

The draft Communication imposes no duty of diligence on national authorities regarding review processes, record-keeping, the assessment of counterproductive or anti-competitive effects, over-deletion of content, complaints mechanisms for over-deletion, or the investigation and prosecution of the serious crimes behind, for example, child abuse material. Apparently, the crimes in question are not serious enough for Member States to have a duty of care of their own. Instead, they hide behind newspaper headlines. The German Justice Ministry indicated, for example, that it had no idea at all about the illegality of 100 000 posts deleted by Facebook nor, if they were illegal, whether any of the posts had been investigated (pdf).

Protecting legal speech, but how?

The draft Communication puts the emphasis on asking companies to proactively search for potentially illegal content, “strongly encouraging” “voluntary”, non-judicial measures for removal of content, and encouraging systems of “trusted flaggers” to report and remove allegedly illegal content more easily. While the European Commission makes reference to the need for adequate safeguards “adapted to the specific type of illegal content concerned”, it fails to suggest any protection or compensation for individuals in cases of removal of legal content, besides a right of appeal or measures against bad-faith notices. The leaked Communication also fails to contemplate any measures to protect challenging speech of the kind the European Court of Human Rights insisted must be protected.

Regulation by algorithm

It is very worrisome that the Commission is encouraging and funding automatic detection technology, particularly when at the same time it recognises that “one-size-fits-all rules on acceptable removal times are unlikely to capture the diversity of contexts in which removals are processed”. It is also worrisome that the leaked Communication claims that “voluntary, proactive measures [do] not automatically lead to the online platform concerned playing an active role”. This means that the Commission believes that actively searching for illegal content does not imply knowledge of any illegal content that exists. Ironically, in the Copyright Directive, the Commission’s position is that any optimisation whatsoever of content (such as automatic indexing) does imply knowledge of the specific copyright status of the content. With regard to automatic detection of possible infringements, the Commission recognises human intervention as “best industry practice”. It refers to human intervention as “important”, without actually recommending it, despite acknowledging that “error rates are high” in relation to some content.

In addition, astonishingly, the draft Communication suggests that we need to avoid making undue efforts to make sure that the (possibly automatic) removals demanded by these non-judicial authorities are correct: “A reasonable balance needs to be struck between ensuring a high quality of notices coming from the trusted flaggers and avoiding excessive levels of administrative burden”, the leaked Communication says.
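
The Communication does not specify which detection technologies it has in mind, but a toy example illustrates one structural weakness of the simplest form of automated “stay-down” filtering – exact hash matching. The snippet below is purely illustrative (the file contents and blocklist are invented); it shows why trivially modified re-uploads evade naive filters, while the filter itself can never assess context or legality:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact cryptographic fingerprint of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# A hypothetical "stay-down" blocklist of previously removed material.
blocklist = {fingerprint(b"previously removed video bytes")}

exact_copy = b"previously removed video bytes"
altered_copy = b"previously removed video bytes."  # one added byte

print(fingerprint(exact_copy) in blocklist)    # True:  identical re-upload is caught
print(fingerprint(altered_copy) in blocklist)  # False: a trivial edit evades the filter
```

Perceptual or fuzzy hashing narrows this gap, but at the price of false positives – which is precisely why human review and counter-notice procedures matter.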

Points worth keeping in the final draft

To be fair, the draft Communication also contains some positive points. It is welcome that the Commission recognises that…

  • big online platforms are not the only actors that are important;
  • “the fight against illegal content online must be carried out with proper and robust safeguards balancing all fundamental rights … such as freedom of expression and due process” – even if the draft Communication doesn’t mention who should provide them, what they are or to whom they should be available;
  • “a coherent approach to removing illegal content does not exist at present in the EU”;
  • the “nature, characteristics and harm” of illegal content is very diverse, leading to “different fundamental rights implications” and that sector-specific solutions should be addressed, where appropriate;
  • harmful content “is – in general – not illegal”;
  • “the approach to identifying and notifying illegal content should be subject to judicial scrutiny”;
  • the possibility of investigation should be facilitated – even if it omits to mention any transparency obligations with regard to whether or how often investigations take place;
  • the role of “trusted flaggers” should comply with certain criteria – even if the draft Communication does not say what those criteria would be – and that they should be auditable and accountable, and that abuses must be terminated;
  • notices must be “sufficiently precise and adequately substantiated”;
  • content-dependent exceptions are foreseen for automatic stay-down procedures, even if the Commission makes unsubstantiated and at least partly false assertions about the effectiveness of such measures;
  • transparency reports are encouraged, even though nothing in the draft would resolve the total failure of transparency evident in the implementation of the “hate speech” code of conduct;
  • counter-notice procedures are important, and therefore encourages them;
  • filtering technologies have limitations, even if these are not mentioned in all relevant parts of the draft and even if their damaging impact on freedom of expression is not duly addressed.

We can only hope that these important elements remain in the final draft. We participated in expert meetings where we provided a suggested way forward, so the Commission knows what is needed if it is to respect its obligations under the Charter of Fundamental Rights of the European Union and to avoid the recklessness demonstrated by, for example, the lack of a review mechanism for the Internet Referral Units. We will find out whether the Commission has the courage to deliver. Further improvements are urgently needed before the final version is published next week.

20 Sep 2017

Human Rights Court sets limits on right to monitor employees

By Anne-Morgane Devriendt

On 5 September 2017, the Grand Chamber of the European Court of Human Rights (ECtHR) ruled on the Bărbulescu v. Romania case. It found a breach of the right to respect for private and family life and correspondence (Article 8 of the European Convention on Human Rights), as claimed by Mr Bărbulescu, who was fired after his employer monitored his communications and found that he had used company property to exchange messages with family members. Although the ruling does not forbid employee monitoring, it clarifies how monitoring can be done while respecting fundamental rights.

The Grand Chamber questioned the earlier national court decisions. It noted that national courts did not properly assess whether Mr Bărbulescu had been warned that he might be monitored, and to what extent he would be monitored. The Court also clarified the limits regarding legal monitoring of an employee by their employer and the ways national courts should assess them.

First, one of the key aspects that the Court pointed out was the lack of information given to Mr Bărbulescu about the monitoring to which he might be subject. Second, the Court ruled that, in addition to the obligation to provide information, monitoring of employees always needs to be done for a legitimate aim, in a way that is proportionate to that aim and that does not intrude on their privacy more than necessary to achieve that goal. None of these safeguards had been observed in this case, as the Court pointed out in paragraph 140 of its ruling: “the domestic courts failed to determine, in particular, whether the applicant had received prior notice from his employer of the possibility that his communications on Yahoo Messenger might be monitored; nor did they have regard either to the fact that he had not been informed of the nature or the extent of the monitoring, or to the degree of intrusion into his private life and correspondence. In addition, they failed to determine, firstly, the specific reasons justifying the introduction of the monitoring measures; secondly, whether the employer could have used measures entailing less intrusion into the applicant’s private life and correspondence; and thirdly, whether the communications might have been accessed without his knowledge”.

It needs to be stressed that the ruling does not find monitoring of employees’ communications illegal in all situations; rather, it limits the power to monitor. The judgement restricts employers’ right to monitor employees’ communications by requiring a legitimate justification, proportionality, and limits on the scope and degree of intrusion. All of these safeguards should have been applied in this case, and should be applied in any similar case in the future. The Court clarified that an employee continues to enjoy the right to private and family life in the workplace.

Press release for the Grand Chamber judgement (05.09.2017)
http://hudoc.echr.coe.int/eng?i=003-5825428-7419362

Romanian whose messages were read by employer “had privacy breached” (05.09.2017)
https://www.theguardian.com/law/2017/sep/05/romanian-chat-messages-read-by-employer-had-privacy-breached-court-rules

Privacy International response to Grand Chamber of the European Court for Human Rights Bărbulescu v. Romania judgement (05.09.2017)
https://medium.com/@privacyint/privacy-international-response-to-grand-chamber-of-the-european-court-for-human-rights-barbulescu-v-cc722b73086b

(Contribution by Anne-Morgane Devriendt, EDRi intern)

20 Sep 2017

Dutch digital investigation: Pushing the boundaries of legality

By Bits of Freedom

A Dutch court is currently considering the case against Naoufal F., in which the police made use of several advanced digital investigation methods that challenge the boundaries of the law.

A key issue in the case is the way in which the police gained access to and analysed the secure communication of suspects. Inez Weski, the lawyer of multiple suspects in this case, claims that in the process, the police violated so many rules that the trial must be stopped. The judge is of a different opinion. However, that doesn’t mean this will be the end of the discussion.

As part of an investigation into (failed) assassinations, the police discovered that individuals under investigation were using the PGP-phones and communication network of the Dutch company Ennetcom. One of them was Naoufal F. The communication was secured with PGP, a commonly used encryption method. To read this encrypted information, the police needed either to have the key, or to bypass the encryption, for example by hacking.

The police confiscated one of Ennetcom’s Canadian servers and made a copy of the data on it. Besides the encrypted information, the server apparently contained the PGP keys with which the encrypted communication was secured. The communication could thus be decrypted and read. The police also succeeded in reading the communications that were stored on the PGP-phones.
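
This is the crux of the technical issue: PGP is only as strong as the secrecy of the private keys. The minimal sketch below, using the python-gnupg bindings (which require a local GnuPG installation), illustrates the general principle only – it does not reproduce Ennetcom’s actual setup, and all names, messages and passphrases are invented. Whoever holds the private key and its passphrase – the user, a server operator, or the police – can read the ciphertext:

```python
import tempfile
import gnupg  # python-gnupg bindings; requires the gpg binary to be installed

gpg = gnupg.GPG(gnupghome=tempfile.mkdtemp())  # throwaway keyring for the demo

# A key pair as it might exist on a provider's server (hypothetical user).
key = gpg.gen_key(gpg.gen_key_input(
    name_email="user@example.com", passphrase="secret", key_length=2048))

# A message encrypted to that key, as it might be intercepted "at rest".
ciphertext = gpg.encrypt("meet at noon", key.fingerprint, always_trust=True)

# Anyone in possession of the private key (and passphrase) can decrypt it.
plaintext = gpg.decrypt(str(ciphertext), passphrase="secret")
print(plaintext.data)  # b'meet at noon'
```

If, as Weski claims, the keys were stored with another company rather than on the confiscated server, the police would have copied only unreadable ciphertext – which is why that factual question matters so much.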

The police then used Hansken, a forensic search engine developed by the Dutch Forensic Institute (NFI), to search all the gathered information.

The question is whether the police were allowed to do this. First, Weski compares the confiscation of the server with casting a dragnet over a communications network. According to Weski, a great deal of communication belonging to people who are not suspects has been unjustly gathered. She claims Ennetcom offered a perfectly legal service that, although it might also have been used by criminals, was used by companies, governments and innocent citizens. Therefore, the server should never have been confiscated in this manner. A striking detail is that, according to Weski, the PGP keys were not on the server at all, but were stored by another company. If that turns out to be true, it might change the case significantly.

In addition, Weski argues that the Hansken forensic search engine used in the investigation is, in itself, an “extralegal” investigation tool that should not have been used – extralegal meaning that no such investigative tool is defined in law. There are occasions when the use of an extralegal tool is allowed: for example, if there is no major violation of the rights of a suspect, and if its use does not pose a risk to the integrity and manageability of the investigation. However, Weski believes Hansken does not meet these requirements.

Weski wanted the trial stopped due to grave errors and problems in the investigation, and asked for the case to be ruled inadmissible. The police obviously disagree, as does the Public Prosecution Service. The judge ruled that the trial can continue.

This case shows yet again that the police increasingly and more easily gain access to large amounts of information – first of all, because there is simply more data available. Secondly, by using more advanced analysis techniques, more and more information can be extracted from the available data. Information that in itself might seem unimportant can become valuable when combined with other information. This results in more intrusive analysis of personal data.

It also fundamentally changes the nature of police work, because the emphasis shifts even more towards automated data processing. In this case, both components come together: a server with a huge amount of information is available, and so is an advanced analysis tool for searching that data.

The Dutch Code of Criminal Procedure is no longer aligned with digital developments. The Ministry of Security and Justice has launched a draft proposal that is meant to address the “new” challenges of digitalisation and to replace the current Code.

However, the draft proposal also falls short in providing answers to the problems that surface in the case against Naoufal F. For instance, better oversight of digital investigation is needed. It is also necessary to re-think the gathering of large datasets that include data of innocent citizens. Finally, there should be better rules concerning the analysis of that data. The new law for the secret services includes a separate rule for the analysis of data; the new law for the police does not. Why should less stringent rules apply to the police?

Does digital investigation fit the confines of the law? (only in Dutch, 30.08.2017)
https://bof.nl/2017/08/30/past-de-digitale-opsporing-nog-wel-in-het-wetboek/

Case against Naoufal F.: test case for the justice system (only in Dutch, 29.08.2017)
https://www.nrc.nl/nieuws/2017/08/29/strafzaak-tegen-naoufal-f-testcase-voor-justitie-12720148-a1571435

(Contribution by Ton Siedsma, EDRi member Bits of Freedom, the Netherlands; Translation: Ludwine Dekker)

20 Sep 2017

Secret documents reveal: BND attacked Tor and advises not to use it

By Guest author

The German spy agency BND developed a system to monitor the anonymity network Tor and warned federal agencies that its anonymity is “ineffective”. This is what emerges from a series of secret documents published by the German Netzpolitik blog. The spies handed a prototype of this technology over to the US National Security Agency (NSA), in expectation of a favour in return.

The story begins a few weeks prior to the annual SIGINT Development Conference in 2008, when BND hackers “developed the idea of how the Tor network could be monitored relatively easily”, according to internal BND documents. In March 2008, the spy agency briefed its partners from the US and UK. During a visit of a foreign delegation to Munich, a BND unit presented “the anonymity network Tor and a possible disbandment of the anonymity feature”. In order to implement the plan, the BND hoped for “an international cooperation with several foreign intelligence agencies”.

Both the NSA and the UK Government Communications Headquarters (GCHQ) expressed “a high interest” and offered support. The three secret services decided on further meetings and the creation of a project group. The BND hackers told the NSA about “a possibility to penetrate the Tor network” – a term commonly used for the infiltration of IT systems. In this case, the documents suggest that the spy agencies wanted to exploit a design decision that Tor had publicly specified.

Because of a lack of interest in the project within the BND, it was stated that “further development is primarily geared to the needs of the partner”, meaning the NSA. The proof of concept was already “a good status to talk to the experts of the Yanks”. While the BND hoped that their analysts could be “pushed” to work on Tor, their true goal was bigger. The BND wanted something from the NSA: a technology from the “field of cryptanalysis”, to decipher encrypted communication.

On 20 February 2009, a 16-page “concept for tracking internet traffic, which has been anonymized with the Tor system” was finalised. The cover is far from modest: a vegetable chopper over the logo – an onion – of the Tor network. Precisely how the BND planned to “chop” Tor is unfortunately hidden in the redacted parts of the document Netzpolitik obtained. But to implement the attack, the BND probably ran its own servers in the Tor network, feeding passive snooping servers presumably operated by the NSA; the paper emphasises the “protection of the anonymity” of the spy agencies.
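
The documents do not spell out the method, but an adversary operating nodes at both edges of the network is classically associated with end-to-end traffic confirmation: correlating when and how much data enters and leaves Tor. The sketch below is a deliberately simplified illustration of that idea – all timestamps and sizes are invented, and real attacks must cope with noise, padding and vastly more traffic:

```python
# Toy end-to-end correlation: compare traffic volume profiles observed
# at an entry node and at an exit node. A high similarity suggests the
# two observation points saw the same underlying flow.
from collections import Counter

def volume_profile(packets, bucket_ms=100):
    """Bucket (timestamp_ms, size_bytes) observations into a time series."""
    profile = Counter()
    for ts, size in packets:
        profile[ts // bucket_ms] += size
    return profile

def similarity(a, b):
    """Cosine similarity of two profiles (1.0 = identical shape)."""
    buckets = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in buckets)
    norm_a = sum(v * v for v in a.values()) ** 0.5
    norm_b = sum(v * v for v in b.values()) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

entry_side = [(0, 512), (120, 1400), (260, 1400), (900, 512)]
exit_side = [(30, 512), (150, 1400), (290, 1400), (930, 512)]  # ~30 ms later

print(similarity(volume_profile(entry_side), volume_profile(exit_side)))  # 1.0
```

This is also why Dingledine’s remark below about growing the network matters: the more traffic an attacker must observe to find matching patterns, the harder such confirmation becomes.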

Three weeks after the concept paper, the GCHQ was “very interested in the [BND’s] access to the Tor network”, according to the internal report of a meeting at the BND headquarters. Both parties agreed to arrange further technical discussions and a “joint workshop on possible technical and operational procedures”. Five days afterwards, the Americans accepted the offer of the concept paper – the NSA and GCHQ took over the project. Whether the BND received the compensation it hoped for remains unknown. When Netzpolitik confronted the BND with a set of specific questions, it received only the boilerplate answer: “As a matter of principle, the BND talks about operational aspects of its work only with the Federal Government and the competent authorities of Parliament.”

One and a half years later, the BND warned German federal agencies not to use Tor. The hacker unit “IT operations” entitled its report: “The anonymity service Tor does not guarantee anonymity on the internet”. According to the executive summary, Tor is “unsuitable” for three scenarios: “obfuscating activities on the internet”, “circumventing censorship measures” and “computer network operations for intelligence services” – spy agency hacking. The BND assumes “a very high level of surveillance within the network”, including the possibility that anyone can “set up their own so-called exit nodes for monitoring”.

According to the BND, “Tor is predominantly used to conceal activities, where users are not convinced of the legality of their actions. The number of Tor users who aim at preserving anonymity out of mere privacy considerations is relatively small.” The BND bases this statement on “several pieces of intelligence”, but does not underpin it with any facts.

Netzpolitik reached out to several people from the Tor project, but nobody had any idea how the BND came up with this hypothesis. “That sounds like nonsense,” said IT security advisor Jens Kubieziel, who is a system administrator for the Tor project and runs large Tor exit nodes.

Spy agencies and other agencies worldwide “have ways to counter anonymity. One of them is to set up own Tor nodes and monitor those intensively to gather intelligence and evidence”. The spy agencies do not treat this as a secret: “Some agencies have already reported about installing their own Tor nodes and using the logged data for different projects and criminal investigations.”

Looking at the activities of the NSA and GCHQ, the BND’s concern might just be justified. Two years after the Germans presented their gift, the spy agencies continued their work on breaking Tor. The efforts of the British team are documented in the GCHQ’s internal wiki, published by the German magazine Der Spiegel from the Snowden archive.

Well-funded international spy agencies continue to refine their attacks. But the Tor community also continues to improve the project and fight off attacks – in close collaboration with the privacy research community. Project leader Roger Dingledine is skeptical as to whether spy agencies are able to make their attacks “work at scale”. Nevertheless, the documents show that “we need to keep growing the Tor network so it’s hard for even larger attackers to see enough Tor traffic to do these attacks.”

However, according to Dingledine that is not enough: “We as a society need to confront the fact that our spy agencies seem to feel that they don’t need to follow laws. And when faced with an attacker who breaks into internet routers and endpoints like browsers, who takes users, developers, teachers, and researchers aside at airports for light torture, and who uses other, ‘classical’ measures – no purely technical mechanism is going to defend against this unbounded adversary.”

This is a shorter version of an article by Netzpolitik: https://netzpolitik.org/2017/secret-documents-reveal-german-foreign-spy-agency-bnd-attacks-the-anonymity-network-tor-and-advises-not-to-use-it/

(Contribution by André Meister, EDRi observer AK Zensur, Germany; Adaptation by Maren Schmid, EDRi intern)

20 Sep 2017

Did the EU Commission hide a study that did not suit their agenda?

By Maren Schmid

In 2013, the European Commission announced the launch of a study on copyright – and never published its results. Julia Reda, a Member of the European Parliament (MEP), tabled a freedom of information request on the issue and was eventually granted access to the study.

Even though the independent study was finalised in 2015 and financed by public funds, the European Commission failed to publish the research. A possible reason might be that the key results did not serve the initial purpose of the study – to justify plans to introduce stricter copyright legislation as part of the reform launched by EU Commissioner Günther Oettinger.

The study’s main focus was how online piracy affects the sales of copyrighted content in four industries: music, films, books and games. Between September and October 2013, a representative survey was conducted among approximately 30 000 people from six EU Member States (Germany, France, Poland, Spain, Sweden and the UK).

One of the main conclusions of the study is that there is no robust statistical evidence that online piracy displaces sales. In other words, the study could not prove any negative effect of piracy on the sales of copyrighted content. In fact, it even found a slight positive trend in the gaming industry, indicating that unauthorised playing of games eventually leads to paying for them.

The only partial exception is the film industry, where the consumption of ten pirated movies leads to four fewer cinema visits, and thereby to a loss of five per cent of current sales volume. This might be due to the higher prices of films in comparison to music, books and games.
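
As a rough back-of-the-envelope reading of that figure (our simplification, not the study’s full econometric model), the estimated displacement rate for films is

\[ \delta = \frac{4 \text{ fewer cinema visits}}{10 \text{ pirated viewings}} = 0.4 \]

so the aggregate revenue loss scales with the total volume of pirated viewing, roughly

\[ \text{lost revenue} \approx \delta \times N_{\text{pirated}} \times p_{\text{ticket}} \]

which, given the study’s estimates of how much pirated viewing occurs relative to legal consumption, works out to the reported five per cent of current sales.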

Interestingly, these results concerning the film industry found their way into an academic paper by Benedikt Hertz and Kamil Kiljański, both members of the European Commission’s chief economist team. Yet the other, unpublished results – showing no negative impact of piracy on the music, book and games industries – were not mentioned in the paper. Beyond that, the paper does not even refer to the original study.

This seems to substantiate the suspicion that the European Commission was hiding the study on purpose and cherry-picking what to publish, choosing only the results that supported its political agenda of stricter copyright rules.

We understand that the Commission says it is a complete coincidence that its decision to publish the study, a year and a half after it was finished, happens to coincide with Ms Reda’s freedom of information request. If this is the case, it would be a pity: having experienced delays, obstruction and obfuscation from the European Commission in response to freedom of information requests, we thought that this time it had at least acted in an appropriate, honest and timely manner.

Estimating displacement rates of copyrighted content in the EU
http://ted.europa.eu/TED/notice/udl?uri=TED:NOTICE:276982-2013:TEXT:EN:HTML&tabId=1

Access to documents request: Estimating displacement rates of copyrighted content in the EU (27.07.2017)
https://www.asktheeu.org/en/request/estimating_displacement_rates_of#incoming-14307

Estimating displacement rates of copyrighted content in the EU, Final Report
https://netzpolitik.org/wp-upload/2017/09/displacement_study.pdf

Movie Piracy and Displaced Sales in Europe: Evidence from Six Countries
https://papers.ssrn.com/sol3/Papers.cfm?abstract_id=2844167

Copyright reform: Document pool
https://edri.org/copyright-reform-document-pool/

(Contribution by Maren Schmid, EDRi intern)

20 Sep 2017

Should video-sharing platforms be part of the AVMSD?

By Maryant Fernández Pérez

The Audiovisual Media Services Directive (AVMSD) is currently being reformed. After going through several legislative stages, the AVMSD is now being negotiated in trilogues, that is, informal, secret negotiations between the European Parliament (representing citizens) and the Council (representing EU Member States), facilitated by the European Commission (representing EU interests). As part of the negotiations, a key question will have to be addressed: should some or all video-sharing platforms be covered by the AVMSD and, if so, how?

On the one hand, there are demands for holding video-sharing platforms like YouTube responsible for content (including legal content) published on their sites or apps, because of the impact online content has on public debate and our democracies. On the other hand, these platforms do not produce or publish content, but only host it. It would not make sense for the AVMSD to cover platforms that are so radically different from those the Directive was originally created to regulate – cross-border satellite TV services – as EDRi’s position paper, published on 14 September 2017, argues.

Video-sharing platforms, and social media generally, are not traditional media. While their activities influence (and even manipulate) the population, regulating video-sharing platforms as traditional media is not the solution to undesired impacts on our societies. When two services – linear broadcasting of editorially controlled content and non-linear hosting of content produced by others – are so significantly different, achieving a level playing field through a “one-size-fits-all” approach is not always possible. The consequences of getting it wrong can be damaging for freedom of expression, competition, the fight against illegal material online, and the protection of children in the online environment. At the Council meeting, seven Member States made unusually impassioned pleas to reject the proposed approach, mainly on grounds of freedom of expression. For these reasons, deleting the provisions that extend the scope of the AVMSD would be the most rational option, as EDRi’s position paper suggests.

Failing deletion, EDRi recommends clarifying the definitions of “video-sharing platform” and “user-generated content”. In addition, EDRi’s position paper asks for more clarity when companies are required to take action, in order to avoid abuses, ensure predictability and defend freedom of expression. For instance, some proposals on the table in the trilogue negotiations ask video-sharing platforms to restrict incitement to hatred based on political opinions or “any other opinions”. Asking platforms to delete hate speech based on “any other opinions” is likely to lead to arbitrary restrictions and to affect how we express ourselves online. Another reason to be cautious is that certain provisions would give these companies a “self-regulatory” role in the “moral” development of children. Do we really want companies to decide what is good for the “moral” development of our kids?

Fighting illegal hate speech, terrorism and child abuse is very important. However, asking companies to decide what should and should not be acceptable in our society is worrisome. Numerous examples demonstrate that content is being restricted on video-sharing and social media platforms without accountability or real redress. Creating a situation where video-sharing platforms are forced to regulate more of our communications, and are given more leeway to decide what content we can or cannot access regardless of what the law deems illegal, will not be beneficial for the EU.

EDRi position on AVMSD trilogue negotiations (14.09.2017)
https://edri.org/files/AVMSD/edriposition_trilogues_20170914.pdf

ENDitorial: AVMSD – the “legislation without friends” Directive? (14.06.2017)
https://edri.org/avmsd-the-legislation-without-friends-directive/

Audiovisual Media Services Directive reform: Document pool
https://edri.org/avmsd-reform-document-pool/

(Contribution by Maryant Fernández Pérez, EDRi)
