A measure which would be illegal if implemented by a government should also be illegal if implemented by industry as a “voluntary” measure, as a result of government pressure or for public relations or anti-competitive reasons. However, as key international legal instruments, such as the European Charter of Fundamental Rights and the European Convention on Human Rights, as well as national constitutions are binding for states and governments, they are not directly applicable to other entities, such as private companies. As a result, there is a major trend towards governments persuading or coercing companies to impose restrictions on fundamental freedoms under the guise of “self-regulation,” thereby circumventing legal protections.

26 Apr 2018

Press Release: “Fake news” strategy needs to be based on real evidence, not assumption


Today, 26 April 2018, the European Commission adopted a Communication on “tackling online disinformation”. European Digital Rights (EDRi), The Civil Liberties Union for Europe (Liberties) and Access Now will respond by issuing a joint shadow report in the coming weeks.

“Good policy is based on evidence. For the moment, we have different initiatives from the European Commission that do not even agree on how to define the problem being addressed”,

said Maryant Fernández Pérez, Senior Policy Advisor at European Digital Rights (EDRi).

“First we have to understand the problem we face: the real effect of fake news. For that, we need research and data. Liberties urges policy makers to refrain from placing disproportionate limits on free speech and privacy. Doing so will not solve the problem of fake news, but make the situation worse,” said Eva Simon, advocacy officer for freedom of expression at Liberties.

“Policy makers should move away from generic and misleading actions under the false umbrella term of ‘fake news’. Access Now urges all actors to adopt, strengthen and respect enforceable privacy rules around online tracking, which can solve challenges in the information ecosystem, including the spreading of misinformation and the profiling of users,” added Fanny Hidvégi, European Policy Manager at Access Now.

We urge the European Commission not to rush into taking binding measures regarding “fake news” or “online disinformation”, but rather to take the expertise of civil liberties and digital rights experts into account. Liberties, EDRi and Access Now highlight that any and all measures aimed at addressing online disinformation should:

  • have a clear and narrow problem definition;
  • be based on clear empirical data of actual harms that are of a scale that merits intervention;
  • fully respect international human rights law on freedom of expression, personal data protection and privacy;
  • have clear benchmarks;
  • be subject to rigorous ongoing review to prevent counterproductive effects for freedom of expression, privacy and the public policy goals of the measures;
  • not lead to harmful consequences for the technical functioning of the Internet; among others, they should avoid its fragmentation, and ensure that its security, stability and resiliency remain intact; and
  • avoid any measure, such as ancillary copyright, which would serve to make access to quality journalism more difficult and make it even easier to spread disinformation.

Liberties, EDRi and Access Now are working on issuing a shadow report in the coming weeks to provide a thorough human rights assessment of current policy considerations and make constructive recommendations. In the meantime, our full position is in our responses to the Commission’s public consultation.


On 13 November 2017, the European Commission launched a public consultation on “fake news” and “online disinformation”, which did not include a clear definition of “fake news”.

On 13 November 2017, the European Commission announced plans for a “high level expert group” on “fake news”, without defining the subject or subjects in which the experts were expected to be experts.

On 12 January 2018, the European Commission appointed the 39 members of the Group, which included seven TV broadcaster representatives, but neither the United Nations Special Rapporteur on Freedom of Expression and Opinion nor any digital rights-focused organisation.

On 12 March 2018, the European Commission published a Eurobarometer opinion survey in which individuals were asked about their views on “fake news”. Respondents, however, were told that “news or information that misrepresent reality or that are even false” are called “fake news”. On the same date, the “High-Level Expert Group” presented its final report on the topic. The report would have benefited from more diversity among the Group’s membership. For example, we are concerned that the definition of “disinformation” it provides is too broad and relies on the intent rather than the actual effect of the “disinformation”. However, the report raises several key points that we welcome:

  • The High-Level Expert Group cast doubt on the methodology of the Eurobarometer survey, pointing out that “research has shown that citizens often associate the term ‘fake news’ with partisan political debate and poor journalism broadly, rather than more pernicious and precisely defined forms of disinformation.” This clearly indicates that asking about “fake news” in a survey very probably produced unreliable outputs.
  • The report warns unequivocally against “censorship and online surveillance and other misguided responses that can backfire substantially, and that can be used by purveyors of disinformation in an “us vs. them” narrative that can de-legitimize responses against disinformation and be counter-productive in the short and long run. Attention should also be paid to lack of transparency and to the privatization of censorship by delegation to specific bodies/entities or private companies”.

On 18 March 2018, the European Data Protection Supervisor (EDPS) published an opinion on online manipulation and personal data which rightly points out that “fake news” is a “symptom of concentrated, unaccountable digital markets, constant tracking and reckless handling of personal data”.

On 26 April 2018, the European Commission published a Communication on fake news and online disinformation. As with previous initiatives on illegal or unwelcome content online, the European Commission fails to:

  • recognise that measures can backfire;
  • collect data to get early warnings of any such counterproductive effects; or
  • plan for measures to respond to any counterproductive effects.

Read more:

EDRi’s response to the public consultation for legal entities – “Fake news and online disinformation” (22.02.2018)

EU Could Kill Free Speech in Fight Against Fake News (12.03.2018)


26 Apr 2018

LEAK: British EU Commissioner: ID check & prior approval for online posts

By Joe McNamee

In a letter to Commissioner Mariya Gabriel obtained by EDRi1, the British European Commissioner, Sir Julian King, makes it clear that not only does he no longer find it acceptable that people should be able to communicate online without prior approval, he also objects to people communicating without being identified. Commissioner King is pushing the European Union towards an internet where freedom of expression is strangled by filtering and ID checks.


For the past year, Commissioner King and his services have been strongly pushing for “upload filtering (pdf)” – the automatic approval of all uploads in all formats before they are put online. The aim is to ensure that nothing that was previously removed on the basis of the law, or the arbitrary terms of service of an internet company, or that is or has been assessed as being unwelcome or illegal by a guess made by an artificial intelligence programme can be uploaded or re-uploaded to the internet. If the European Commission succeeds in getting this principle accepted by the European Parliament in the Copyright Directive (vote is scheduled for 20-21 June 2018), it plans to rush out new legislation to cover other forms of content within weeks. It seems that some Members of the European Parliament (MEPs) are already being lobbied to push for this.

Paradoxically, while the European Commission echoes populist demands about “all parties” making “more efforts and faster progress” on removing “illegal” content, the Commission itself has no idea how many of the items of allegedly illegal content flagged by the EU police cooperation agency Europol led to an investigation or a prosecution – hardly the serious, diligent approach it demands “from all sides”. “From all sides, except ours” might be more accurate.

ID Checks

Now, acting on his own initiative, Commissioner King has decided that “voluntary” identification (by companies that are eager to collect as much data about us as possible) is the next battle – this time in the fight against “online disinformation” (whatever that may mean) and against abuse of data (collecting more data as a way of preventing collected data from being abused). Facebook’s “real-name policy” has previously caused demonstrable harm to vulnerable and marginalised groups.

In the letter, King proposes multiple ways of achieving this control – such as through the WHOIS database of domain name owners, through surveillance of IPv6 internet protocol numbers (the European Court of Human Rights ruled this week (pdf) that a court order is needed to gain access to IP address data), “verified pseudonymity”, and “other identification mechanisms”.

UK perspective

Coincidentally or not, the British Conservative government, which appointed Commissioner King, last week launched an attack on social media companies for not properly verifying the ages of children. Social media companies, which profit from exploitation of our data, are unlikely to be very unhappy about government pressure to gather still more personal data.

When not fighting “fake news”, the government that appointed Commissioner King allegedly spent more than one million pounds on negative Facebook adverts attacking the leader of the main UK opposition party during the 2017 general election. This is highly unlikely to be the kind of activity that Commissioner King is referring to when he talks about plans to “further limit the possibilities for using mined personal information for certain specific purposes, in particular political ones”.

EDRi will keep working to ensure that EU policy-makers respect the Charter of Fundamental Rights of the European Union when proposing any legislative or non-legislative action to privatise law enforcement functions.

1 While we know that the Financial Times (paywalled), if not others, has obtained a copy of this letter, we are not aware of it having been made public before today.


24 Apr 2018

ePrivacy: Civil society letter calls to ensure privacy and reject data retention


On 23 April 2018, EDRi, together with other civil society organisations, sent a follow-up to our previous open letter to the permanent representations of EU Member States in Brussels. The letter highlighted the importance of the ongoing reform of Europe’s ePrivacy legislation for strengthening individuals’ rights to privacy and freedom of expression and for rebuilding trust in online services, in particular in the light of the revelations of the Cambridge Analytica scandal.

Open letter to European member states on the ePrivacy reform

23 April 2018

Dear Minister,
Dear Member of the WP TELE,

We, the undersigned organisations, support the ongoing and much-needed efforts to reform Europe’s ePrivacy legislation. As we mentioned in our recent open letter, the reform is essential in order to strengthen individuals’ rights to privacy and freedom of expression across the EU and to rebuild trust in online services, in particular given the revelations of the Cambridge Analytica scandal.1

Despite the urgent need to protect the confidentiality of communications, we are aware of the political difficulties that were met during debates in Council and at Working Party level, specifically regarding Article 11 of the proposed ePrivacy Regulation.

Given these difficulties and following the recent publication of the full document WK 11127/2017,2 we would like to highlight a number of legal points that may help move the discussion forward:

– The Court of Justice of the European Union (CJEU) clarified, in two different judgements (Digital Rights Ireland, joined cases C-293/12 and C-594/12, and Tele2-Watson, joined cases C-203/15 and C-698/15), that mandatory bulk retention of communications data breaches the Charter of Fundamental Rights. Any attempt to subvert CJEU case law by adding “clarity to the legal context” without a legal basis that respects the Charter is a direct attack on the most basic foundations of the European Union and should be dismissed. In fact, the current legal framework (the e-Privacy Directive, Directive 2002/58) provides legal clarity since mandatory retention of metadata for the purpose of prevention, investigation, detection or prosecution of criminal offences, as well as access to retained metadata for this purpose, is regulated in its Article 15(1).

– A Regulation aimed at protecting personal data and confidentiality of electronic communications would be deprived of its purpose if certain types of processing (“processing for law enforcement purposes”) are completely excluded from its scope. This was also noted by the Court of Justice in paragraph 73 of the Tele2-Watson judgment. Furthermore, such processing requires specific safeguards defined by the Court and must be necessary and proportionate.

– Finally, we have also noted certain attempts by a number of delegations to introduce a minimum storage period (of 6 months) for all categories of data processed under Article 6(2)(b). If approved, this would impose indiscriminate retention of personal data in a way that has already been ruled as unlawful by the Court of Justice of the European Union in Tele2/Watson. If Article 6(2)(b) establishes a legal basis for processing communications data in order to maintain or restore security of electronic communications networks and services, or to detect errors, attacks and abuse of these networks/services, the processing should still be limited to the duration necessary for this purpose. On top of this, the general principles of GDPR Article 5 should apply, e.g. storage limitation in Article 5(1)(e). If the technical purpose can be achieved with anonymised data, there is no justification for processing data relating to identified or identifiable end-users. Setting a minimum mandatory retention period for communications data processed under Article 6(2)(b) will mean weakening the level of protection guaranteed under the GDPR, which is not only unacceptable but also contradictory to the concept of lex specialis.

We are aware of the political difficulties raised in Council around the issue of data retention; however, the clarity provided by the CJEU in two landmark rulings on that matter cannot and must not simply be ignored. We strongly encourage you to keep in mind all of the legal points above in the ongoing debates. We count on the Council to swiftly conclude a general approach on the ePrivacy Regulation, which should include a legally sound Article 11 rooted in respect for the EU Charter and the CJEU case law, to provide law enforcement authorities with the legal certainty needed to accomplish their duties.3

Yours faithfully,

European Digital Rights




Privacy International


IT-Political Association of Denmark


23 Apr 2018

EU Council Presidency rushes to impose new copyfails in the EU

By Diego Naranjo

The discussions on the Censorship Machine proposal (a.k.a. upload filters) in the EU have suddenly sped up. The next meeting of the Committee of the Permanent Representatives of the Governments of the Member States to the European Union (COREPER) and the Bulgarian Council Presidency is on 27 April. The Presidency published a set of very biased questions to guide the discussions. On 13 April, its latest consolidated, and predictably dreadful, compromise proposal was made available.

In an unusual move in EU politics, the Bulgarian Presidency plans to push the EU Council to adopt its negotiating position (“general approach”) before the European Parliament (EP) adopts its position. The intention of such a manoeuvre cannot be understood in any other way than as an attempt to pressure the EP (where the debate on upload filters and ancillary copyright are progressing very fast) to adopt a position that is anti-citizen, to the benefit of the few large internet companies that could cope with the onerous new obligations.

Despite it being well known that there is no consensus in the Council, and following a series of letters from dozens of organisations calling for improvements to the texts (one signed by over 80 organisations, one signed by us and 49 other NGOs, and another signed by 56 respected academics), it seems that the Bulgarian Presidency has no time to listen to anyone, although there is no good reason for such haste.

In order to achieve the best outcome in the copyright reform, EU Member States need to stand against this unnecessary pressure from the Council Presidency, and the European Parliament needs to stand firm and keep debating the proposal before the vote on 20-21 June in the Legal Affairs (JURI) Committee. There are enough copyfails in the EU to get this reform wrong as well.

Stop the #CensorshipMachine! (10.04.2018)

#CensorshipMachine – How will the decision be taken? (19.03.2018)

Proposed internet filter will strip citizens of their rights: Your action is needed! (28.03.2018)

Copyright reform: Document pool

EU Member States Are Getting Steamrolled on © by the Bulgarian Presidency


23 Apr 2018

Civil society urges Portuguese telecom regulator to uphold net neutrality


On 23 April 2018, 13 civil society organisations submitted a complaint to the Portuguese regulator on one of the most extreme net neutrality violations in Europe, urging them to use their authority to prohibit so-called zero-rating offers.

“Portugal features the worst net neutrality violations we have seen in Europe to this day. It is hard to imagine how an independent regulator cannot find those offers in violation of EU law,”

said Thomas Lohninger, Executive Director of a member organisation of European Digital Rights (EDRi). “In this complaint, we present legal and economic evidence that, by all criteria of the EU net neutrality rules, these products should be prohibited”, he added.

The European Union (EU)’s net neutrality rules protect European citizens’ right to a free and open internet. They came into effect in April 2016. The Body of European Regulators for Electronic Communications (BEREC) laid down guidelines to clarify certain aspects. Despite the protections these guidelines offer, we have witnessed a dramatic increase in net neutrality violations in Europe, particularly zero-rating offers. This practice makes using certain applications more expensive than others. To date, very few regulators in the EU have decided to intervene against such offers, despite having authority to do so.

In Portugal, the three largest mobile operators, MEO, Vodafone and NOS, hold a combined market share of more than 95%. All of them offer zero-rating products that give preferential treatment to dominant internet companies like Facebook and Google. Meanwhile, the country ranks among the worst in Europe when it comes to the price and availability of mobile data download capacity, and the zero-rating offers in question are far cheaper than any other data volume a Portuguese citizen can buy. The telecom companies decide in an opaque process which internet services are included in the zero-rating offer. The operator can exclude services from the offer at any time without being accountable to its customers.

In March 2018 the Portuguese regulator ANACOM finally decided to start a formal assessment of this offer and came to the conclusion that the telecom companies offering these products are allowed to continue. ANACOM’s draft decision is now subject to a consultation, and 13 civil society organisations (NGOs) have written a joint submission to urge the regulator to change its position. As part of this submission the NGOs present statistical evidence based on data from the European Commission that zero-rating has a detrimental effect on the price of internet access. “In general, prices for mobile data volume in Europe fell by 8% from 2015 to 2016, except in markets where zero-rating products are offered. There, prices increased by 2%”, said Thomas Lohninger.

The EU protections for net neutrality will soon undergo a reform. BEREC is currently conducting a public consultation on the guidelines for national regulators it passed in 2016, and the European Commission is expected to publish its evaluation of the underlying regulation by April 2019. EDRi and its member organisations will continue to fight for net neutrality, sharing its analysis with regulators, legislators, and the courts.

Submission of 13 civil society organisations to the Portuguese regulator (19.04.2018)

Portuguese ISPs given 40 days to comply with EU net neutrality rules (07.03.2018)

Video: Net Neutrality Enforcement in the European Union (29.12.2017)

Net neutrality wins in Europe! (29.08.2016)

BEREC Guidelines on the Implementation by National Regulators of European Net Neutrality Rules (30.08.2016)

Zero rating: Why it is dangerous for our rights and freedoms (22.06.2016) campaign (2013-2016)


* This press release was updated to change “To date, not a single regulator in the EU has…” to “To date, very few have decided to intervene against such offers, despite having authority to do so”.

18 Apr 2018

Hermes Center demands investigation of NAT-related data retention

By Hermes Center

On 27 March 2018, EDRi member Hermes Center for Transparency and Digital Human Rights filed a request with the Italian Data Protection Authority (DPA) to investigate the widespread practice of logging Network Address Translations (NAT) by most telecommunication operators.

To better understand the issue, we must first look, from a technical point of view, at how telecommunications companies allocate IP addresses, in particular through the practice of Carrier-Grade NAT (CGN), an approach used by telecommunications companies – and especially mobile operators – to manage the allocation of IPv4 addresses. Due to the shortage of available IPv4 addresses, it has become necessary to assign private IP addresses to customers, and then translate them into public IP addresses through a NAT procedure performed by devices connected to the operator’s network. In this way, a single public IP address can mask several private IP addresses, making it more difficult to directly identify the specific user who, on a given day and at a given time, was assigned a particular internet identifier (much as a telephone number identifies a subscriber).

According to law enforcement authorities (LEAs), this practice complicates the identification of those who commit crimes because, behind a given public IP address, there may be dozens of different users. A practice widely used by telecommunication operators to deal with identification requests from the judicial authorities is to record and store all NAT operations between the private IP addresses of their customers and public IP addresses: in this way, all connections of the various IP addresses to the internet are recorded.
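As a rough illustration of why operators keep such logs, the sketch below models a CGN translation table and a reverse lookup over it. The record layout, field names and addresses are illustrative assumptions for this article, not any operator's actual logging format.

```python
# Hypothetical sketch of the kind of Carrier-Grade NAT (CGN) translation log
# described above. Layout and names are illustrative assumptions only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class NatRecord:
    timestamp: datetime  # when the translation was created
    private_ip: str      # customer's private address (e.g. RFC 6598 space)
    private_port: int
    public_ip: str       # shared public address seen by the wider internet
    public_port: int

# Many customers share one public IP address; only the log links a
# public ip:port pair back to an individual subscriber line.
log = [
    NatRecord(datetime(2018, 3, 1, 14, 0, 5), "", 51324, "", 40001),
    NatRecord(datetime(2018, 3, 1, 14, 0, 7), "", 49872, "", 40002),
]

def identify_customer(public_ip, public_port, when):
    """Reverse lookup: which private address used this public ip:port at `when`?

    Without per-connection records like these, a public IP address alone
    matches dozens of different users -- which is exactly why operators
    retain such logs to answer judicial identification requests.
    """
    for rec in log:
        if (rec.public_ip == public_ip and rec.public_port == public_port
                and rec.timestamp <= when):
            return rec.private_ip
    return None

print(identify_customer("", 40002, datetime(2018, 3, 1, 14, 5)))
# ->
```

The privacy concern raised in the complaint follows directly from this structure: to make any single connection attributable after the fact, the operator must retain a record of every connection of every customer.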

The Hermes Center demanded that the Italian Data Protection Authority perform a timely verification and inspection of all the main mobile and fixed operators in relation to their practices of internet traffic data collection, publicly reporting the results, in order to verify what information is collected for the purpose of providing compulsory services to the judicial authorities.


A recently introduced Italian law on data retention has extended the retention period for telecoms providers to up to six years. This data retention concerns both phone traffic and internet connections and clearly goes against the European data retention principles.

On 13 October 2017, Europol and the Estonian Presidency of the Council of the European Union organised a workshop with 35 policy-makers and law enforcement officials from all around Europe, in order to discuss the “increasing problem of non-crime attribution associated with the widespread use of Carrier Grade Network Address Translation (CGN) technologies by companies that provide access to the internet”.

The Hermes Center filed a Freedom of Information (FOI) request to Europol; the documents obtained are linked below. In Italy, the Hermes Center has appealed to the Data Protection Authority, asking for inspections across all telecommunication operators in order to verify in detail exactly which information elements are logged to comply with data retention laws.

Italy extends data retention to six years (29.11.2017)

Europol’s FOIA on data retention with carrier grade NAT (22.01.2018)

Documents related to the Hermes Center’s FOI request to Europol

(Contribution by Riccardo Coluccini, EDRi-member Hermes Center for Transparency and Digital Human Rights, Italy)



18 Apr 2018

Internet protocols and human rights

By Guest author

Recently, a lot of thought has been devoted to the issue of human rights and internet protocols.

Internet protocols in this context are not about content. Using email as an example, internet protocols define how your computer or device locates and communicates with your email service, and how that email service locates and communicates with other email services. They define how one email service may authenticate another email service. They are not about what you put in your mail; they are about how the technology works.


However, even if protocols are agnostic about what is being communicated, they can sometimes make more data than necessary visible to intermediaries. Any use of the internet requires services to be located. One current area of interest concerns how much information you need to give, and when, in order to locate another service. As an example, imagine going to a medical appointment. You do not start by asking how to get to a specific department in the hospital. You start by asking how to get to a particular metro station, and when there, how to get to the hospital – and only within the hospital do you seek directions to the specific department. There is work going on to replicate that style within core internet services. For more information, see the presentation on the Domain Name System (DNS) and privacy by Sara Dickinson of Sinodun at Afnic’s Open Day in 2017.

The Internet Engineering Task Force (IETF) is the place where internet protocols are defined. It is probably the most important standardisation body regarding the internet. It does not deal with the “whole internet”, though. Its focus is on the Internet Protocol (IP) layer. It does not deal with user interfaces or content. It does not deal with what happens in particular computers or devices. It does not develop the lower layer technologies, such as Bluetooth, WiFi, 5G and the like. But it does define many of the protocols that allow machines and networks to interoperate.

The IETF met in London from 18 to 23 March 2018. There were over 1200 participants. These included people from all the major technology players, EDRi member Article 19, Europol, the US National Security Agency (NSA), and more.

The IETF is an open, evolving community of engineers that basically produces advice for engineers. The “advice” tends to be in the form of Requests for Comments (RFCs), of which there are now over 8000. They come in various flavours, from “internet standards” to “informational” to “jokes”. The “joke” RFCs are important. They are fun, help maintain the sense of community, and they also stop organisations from simply saying they follow “all IETF RFCs” in their products.

The IETF is unlike many other standardisation bodies. It is very open. There is no membership fee and no membership list, which in turn means no community-wide voting. There are no significant barriers to participation; all people really need is the ability to work on a mailing list. Even for the big meetings more and more people are participating remotely. Decisions, however, are then “made on the list”. People participate because they find it useful; people use the standards because they find them useful.

Over the years the IETF community has taken positions on how to deal with property rights claims in developing standards, and on how to take into account security and privacy considerations – including work on the permanence of identifiers, and on law-enforcement interception and security agency mass surveillance.

The IETF also has a sister organisation, the Internet Research Task Force (IRTF), which met in London at the same venue and time. The two communities overlap to a significant extent. While IETF Working Groups focus on shorter-term issues of engineering and standards, IRTF Research Groups focus on longer-term research issues.

While some Research Groups are clearly technical, there are at least two with clear policy dimensions.

Many people believe that internet access should be considered a basic human right. The Global Access to the Internet for All Research Group (GAIA) addresses the challenge of the growing digital divide between those with functional access to the Internet and those who simply cannot afford access. One of their objectives is to develop a longer-term perspective on IETF standardisation efforts and this could include recommendations to protocol designers and architects.

The Human Rights Protocol Considerations Research Group (HRPC) is chartered to research whether standards and protocols can enable, strengthen or threaten human rights, including the right to freedom of expression and the right to freedom of assembly.

Everything about these two Research Groups (well, everything except the coffee and cookies) is available online: their charters, their working documents, their mail archives, instructions on how to join the mailing lists, and all the meeting materials. You are welcome to participate.

Global Access to the Internet for All Research Group (GAIA)

The Human Rights Protocol Considerations Research Group (HRPC)

Video: Domain Name System (DNS) and privacy

(Contribution by Gordon Lennox, Technologies, Droits, Responsabilités, Société association – TDRS and EDRi Advisory Board Member)



18 Apr 2018

Fighting for migrants’ data protection rights in the UK

By Guest author

Since 2014, the United Kingdom (UK) government has steadily rolled out policies to make the country a “hostile environment” for migrants, in the words of Prime Minister Theresa May.


This has involved turning various ordinary institutions into border protection agencies. Banks have to collect and supply data to the Home Office (the UK’s interior ministry) on their customers’ immigration status. Landlords are required to check immigration documents before rental. Schools were checking pupils’ nationality and also sharing information with the Home Office, before a boycott campaign put an end to the practice in April 2018. Hospitals, too, must process immigration paperwork before they can deliver any non-urgent treatment. The police, in some regions, are piloting a handheld biometric ID device that instantly gives street officers access to an immigration database.

In the “hostile environment”, migrants are losing the right to live free of pervasive monitoring. They’re also losing the right to basic data protection. This is particularly evident in the case of a data-sharing agreement between the National Health Service (NHS), the Department of Health, and the Home Office. This agreement, established through a Memorandum of Understanding (MoU) in late 2016, without any consultation of professionals or the public, allows immigration enforcement officers to request patient data held by NHS Digital, the database manager for public health in the UK.

The Migrants’ Rights Network (MRN) has been at the forefront of civil society responses to this scheme. MRN, together with Doctors of the World UK, Docs not Cops (a group of professionals resisting the implementation of “hostile environment” measures in the health sector), and civil rights organisation Liberty, argues that sharing data between health services and immigration control officers violates migrants’ fundamental right to patient confidentiality. Such a breach of fundamental privacy rights is all the more worrying given that the Home Office has an error margin of 10 percent in its decisions to target “immigration offenders” – meaning it routinely requests data on the wrong individuals.

Crucially, introducing the possibility that health services might hand over patient data to the Home Office will make many vulnerable migrants afraid to seek care. This is already a reality. During a parliamentary hearing in January 2018, elected representatives heard the tragic story of an undocumented domestic worker who avoided treatment out of fear that she could be deported, and died of otherwise preventable complications.

MRN argues that such a situation dismantles the very principles of public health, starting with duty of care and public trust in health providers. The Home Office and NHS Digital have denied this, and argue that data-sharing for immigration enforcement is “in the public interest.” Yet the only other reason NHS Digital normally supplies confidential patient data to the Home Office is in the case of serious crime, such as child abuse or murder. By putting immigration and serious crime on a similar level, this data-sharing arrangement contributes to the dramatic criminalisation of undocumented existence (already exemplified in everyday language by the expression “illegal migrant”).

The UK Parliament’s Health Committee and the British Medical Association have both asked for the data-sharing to stop. The Home Office has responded by saying it needs to gather more evidence of the scheme’s impact, which could take more than a year. MRN believes this is unacceptable, as lives are currently at risk. MRN is thus challenging the data-sharing agreement in court. The organisation has obtained permission for judicial review (after appeal), likely to take place during summer 2018, and is currently raising funds to cover its potential court costs.

MRN’s legal challenge is rooted in a desire to protect public health principles and vulnerable lives, but it also has broader implications for data protection in the UK. It aims to send a clear signal that data rights cannot be stripped on the basis of nationality. This is absolutely crucial at a moment when the UK’s latest data protection law, currently being debated in Parliament, includes an exemption clause for immigration enforcement, which would prevent migrants from exercising their full rights under the EU General Data Protection Regulation (GDPR). MRN thus hopes to set a positive precedent for judicial activism on these matters, and make a strong case for non-discrimination as a pillar of data justice.

Against Borders for Children campaign: We won! DfE are ending the nationality school census!

Crowdjustice fundraiser: Stop data-sharing between the NHS and the Home Office

Making the NHS a ‘hostile environment’ for migrants demeans our country (24.10.2017)

‘Hostile environment’: the hardline Home Office policy tearing families apart (28.11.2017)

NHS accused of breaching doctor-patient confidentiality for helping Home Office target foreigners (09.11.2017)

Migrants’ Rights Network granted permission for judicial review of patient data-sharing agreement between NHS Digital and the Home Office (01.03.2018)

MRN legal challenge against NHS data-sharing deal (29.11.2017)

(Contribution by Fabien Cante, LSE Media & Communications / Migrants’ Rights Network, the United Kingdom)


18 Apr 2018

Privacy at ICANN: WHOIS winning?

By Guest author

The Internet Corporation for Assigned Names and Numbers (ICANN) has struggled over the publication of the name, address, phone number, and email address of domain name registrants since its inception in 1998. That registry is called WHOIS.
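To make concrete what this registry exposes, a WHOIS response is free-form key/value text (RFC 3912 deliberately specifies no schema). The sketch below parses a hypothetical record and pulls out the registrant’s personal fields; the field names are typical examples only, and real registries vary in the labels they use.

```python
# Illustrative only: a minimal parser for the key/value layout used by many
# WHOIS servers. RFC 3912 responses are free-form text, so the field names
# below are representative examples, not a guaranteed schema.
def parse_whois(text):
    record = {}
    for line in text.splitlines():
        # Skip comment lines (often prefixed with "%") and lines with no colon.
        if ":" in line and not line.lstrip().startswith("%"):
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if value:
                record.setdefault(key, value)
    return record

# A made-up record, showing the kind of personal data WHOIS publishes.
sample = """\
Domain Name: example.org
Registrant Name: Jane Doe
Registrant Street: 12 High Street
Registrant Phone: +44.2079460000
Registrant Email: jane@example.org
"""

record = parse_whois(sample)
personal = {k: v for k, v in record.items() if k.startswith("Registrant")}
```

Everything in `personal` – name, street address, phone number, email – is what WHOIS has historically made available to anyone, which is precisely the exposure at issue here.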


WHOIS might have worked well during the 1980s, when only a few researchers had domain names, but now it exposes millions of individuals to harassment and spam. So far, neither the efforts of the civil society volunteers at this multi-stakeholder organisation (notably the Noncommercial Users Constituency), nor the repeated interventions of the Data Commissioners of the world, have had much impact. However, there is a huge struggle going on now over compliance with the European General Data Protection Regulation (GDPR). Registrars who collect registrant data and provide it according to their contracts with ICANN have obtained legal advice indicating that they are vulnerable to significant fines.

ICANN continues to try to maintain a registrant directory that permits the continued access of many third parties, notably law enforcement agencies, trade mark and copyright holders, and private sector cybercrime investigators and reputational “blacklisters”. There has been a flurry of activity to address long-neglected privacy rights, and CEO Göran Marby has been asking for advice from the Article 29 Working Party. They answered on 11 April 2018 in a letter that was quite clear about ICANN’s failure to comply.

According to the Non-Commercial Stakeholder Group (NCSG), key issues that remain are:

  1. There is no multi-stakeholder process at the moment: in recognition of the work that was going on, the WHOIS policy development process has been temporarily suspended. The CEO and the Board will make a decision, claiming it to be based on advice from the Article 29 Working Party and on “community input”. That interim policy is good for a year, during which time the community can propose changes through a normal policy development process. Once the year is over (and the process itself takes a couple of months to vote through a policy), the interim policy will become the final policy unless there is an agreed replacement. Given the recent history of the Registration Directory Services Policy Development Process (RDS PDP), it is highly unlikely that consensus to change the interim solution would be achieved in less than a year. This appears to be an abandonment of the multi-stakeholder process, and requires close scrutiny. A multi-stakeholder process needs to remain in place to reach some kind of consensus on the biggest policy debate that ICANN has confronted in its history.
  2. The purpose of the collection, use and disclosure of registrant data is being construed to include feeding the third party actors who have always had free access to the data (in the NCSG view, often illegally).
  3. The issue of public safety and consumer protection as a reason to permit widespread access to data is unsupported by recent accurate data.
  4. The risks to individuals and small organisations have never been measured.
  5. The proposed tiered access model depends for its efficacy on a serious accreditation process. Because there is no time to develop one before 25 May, the day the General Data Protection Regulation becomes enforceable, an interim self-accreditation process is proposed. There may not be an appetite to work on proper standards that engage the data protection authorities, and the interim solution will not only expose individuals to marketing, domain expropriation, spam, and risk from political adversaries; self-accreditation also risks setting up an anti-competitive regime where registrant data is held by dominant players.
  6. ICANN is still not clear as to whether it regards itself as a data controller, although a long-serving member of the ICANN community challenged it publicly on this matter at the ICANN61 meeting in March 2018. It has also thus far refused to appoint a privacy officer for any registrant data related issues. What is clear to the NCSG is that ICANN is the only contracting party that has access to all escrowed data of registrants, and that it sets the terms for that escrow arrangement. It also sets the terms for the contracts with registries and registrars, and enforces their compliance through the Global Domains Division (compliance branch). It is worth noting that one of the recommendations of the business community proposal is that ICANN must retain access to all registrant data at all times, whatever the solution selected.
  7. For those not following the GDPR closely, the issue of who is the controller may be extremely important in terms of liability.
  8. NCSG is working on a standards development project led by a University of Toronto team, to develop proper accreditation standards for third parties to whom personal data is released by data controllers and processors. There must be strong management practices in place to ensure that the entities asking for the data are indeed who they say they are, and that their purported reasons to request the data are legitimate, limited, and proportionate. There should also be standards to ensure proper safeguarding and eventual destruction of the data, and access rights for individuals, as well as transparency except in exceptional circumstances. The Article 29 Working Party released a paper in February detailing their expectations and their own involvement in the accreditation of various processors under the GDPR; this standards proposal is working in the same vein, to explore what best management practices look like.
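As a rough illustration of the accreditation idea in point 8, the hypothetical sketch below gates registrant data behind an accreditation check and releases only the fields that a stated purpose justifies. Every name, purpose, and field in it is invented for illustration; it is not any actual ICANN or NCSG proposal.

```python
# Hypothetical sketch of a tiered-access check: an accredited requester with
# a legitimate, stated purpose receives only the fields that purpose
# justifies (data minimisation). All identifiers here are invented.
ACCREDITED = {
    # requester id -> purposes it is accredited for
    "law-enforcement-agency-x": {"legal_investigation"},
}

FIELDS_BY_PURPOSE = {
    # purpose -> fields deemed proportionate for that purpose
    "legal_investigation": {"registrant_name", "registrant_email"},
    "abuse_report": {"abuse_email"},
}

def request_data(requester, purpose, record):
    # Refuse any requester not accredited for the stated purpose.
    if purpose not in ACCREDITED.get(requester, set()):
        raise PermissionError("requester not accredited for this purpose")
    # Release only the proportionate subset of the record.
    allowed = FIELDS_BY_PURPOSE.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}
```

Under such a scheme the accreditation registry, not the requester’s own say-so, decides who gets what – which is exactly what an interim self-accreditation model would lack.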

Working Paper International Working Group on Data Protection in Telecommunications

Working Paper on Privacy and Data Protection Issues with Regard to Registrant data and the WHOIS Directory at ICANN (27-28.11.2017)

Non-Commercial Stakeholder Group (NCSG) Positions on Whois Compliance with GDPR (16.04.2018)

ICANN: Data Protection/Privacy – Latest Announcements, Updates & Blogs

ICANN Receives Data Protection/Privacy Guidance from Article 29 Working Party (12.04.2018)

(Contribution by Stephanie Perrin, University of Toronto, NCSG Councilor)



18 Apr 2018

Cambridge Analytica access to Facebook messages a privacy violation

By Gemma Shields

Less than one month after Cambridge Analytica whistleblower Christopher Wylie exposed the abuse of (so far) 87 million Facebook users’ data, Facebook Co-Founder, Chairman, and CEO Mark Zuckerberg testified before the US Congress.


On 10 and 11 April, Zuckerberg provided testimony in a joint hearing of the Senate Judiciary Committee and the Senate Committee on Commerce, Science, and Transportation, and then to the House Energy and Commerce Committee. He faced questions on a number of democracy-disrupting and privacy-violating issues to which the social media giant has been a party, not least the composition – and use – of personally identifiable data as part of the Facebook-Cambridge Analytica scandal.

This scrutiny gave rise to uncertainty over what Facebook user data Cambridge Analytica had access to, and over just what this personal data comprised. What began as the personality app “This is Your Digital Life”, designed by researcher Aleksandr Kogan and installed by 270 000 Facebook users (which in turn provided access to the data of at least 87 million users), resulted in data consulting firm Cambridge Analytica having access to the private inbox messages of users.

This revelation, whilst a part of the unfolding exposé, was confirmed in the notifications that began appearing at the top of users’ News Feeds, which read: “a small number of people who logged in to ‘This is Your Digital Life’ also shared their own News Feed, timeline, posts, and messages which may have included posts and messages from you.”

With a global reach, the scandal has implications for users worldwide. In the European Union, such access to personal data would be prohibited by the proposed ePrivacy Regulation. Current ePrivacy rules on access to the content of communications do not cover Facebook, although this would change under the proposed ePrivacy Regulation.

So far, Facebook and its allies have successfully lobbied Member States in the EU Council to slow down the adoption of the new Regulation – and not even this scandal has been able to persuade EU Ministers (many of whom signed a letter arguing that our fundamental rights should be “balanced” against “digital products and services”) that Facebook’s access to private communications needs to be restricted.

On how such abuse could happen, a Facebook spokesperson said: “In 2014, Facebook’s platform policy allowed developers to request mailbox permissions but only if the person explicitly gave consent for this to happen. At the time when people provided access to their mailboxes – when Facebook messages were more of an inbox and less of a real-time messaging service – this enabled things like desktop apps that combined Facebook messages with messages from other services like SMS so that a person could access their messages all in one place. According to our records only a very small number of people explicitly opted into sharing this information. The feature was turned off in 2015.”

The conditions for consent set out in Article 7 of the General Data Protection Regulation (GDPR) cannot have been met, however; in particular, the explicit consent of 87 million users to the access and repurposing of their personal data was never obtained.

Users can check if their personal data was harvested and misused by Cambridge Analytica here:

Transcript of Zuckerberg’s appearance before the House committee (11.04.18)

Facebook scandal: I am being used as scapegoat – academic who mined data (21.03.18)

Revealed: Aleksandr Kogan collected Facebook users’ direct messages (13.04.18)

Cambridge Analytica Could Have Also Accessed Private Facebook Messages (04.10.18)

How can I tell if my info was shared with Cambridge Analytica?

(Contribution by Gemma Shields, EDRi intern)