23 Oct 2019

Austrian Passenger Name Records complaint – the key points

By Epicenter.works

Austrian EDRi member epicenter.works filed a complaint about the Passenger Name Records (PNR) system with the Austrian data protection authority (DPA) in August 2019, with the aim of overturning the EU PNR Directive. On 6 September, the DPA rejected the complaint. This was good news: the rejection was the prerequisite for lodging a complaint with the Federal Administrative Court.

The complaint: Objections

Epicenter.works’ complaint about the PNR system to the Federal Administrative Court contains a number of objections. The largest and most central one concerns the entire PNR Directive itself. The Court of Justice of the European Union (CJEU) has already repeatedly declared similar mass surveillance measures to be contrary to fundamental rights, for example in the case of data retention or in the expert opinion on the PNR agreement with Canada.

A complaint cannot be lodged directly with the CJEU, but the Administrative Court can refer questions on the interpretation of the law to the CJEU, as epicenter.works suggested in the complaint. The first suggested question can be summarised as follows: “Does the PNR Directive contradict the fundamental rights of the EU?”

Moreover, Austria has not implemented the PNR Directive correctly: it has partially extended the Directive’s scope and has omitted important restrictions the Directive contains. For example, the Directive requires every automatic hit – for instance, when someone is flagged as a potential terrorist – to be reviewed by a human. The Austrian PNR Act contains no such requirement. The question to the CJEU proposed in the complaint is therefore: “If the PNR Directive is valid in principle, is the processing of PNR data permitted even though automatic hits do not have to be checked by a person?”

Where the Austrian PNR Act goes beyond the Directive, epicenter.works suggests that the Court should request the Constitutional Court to repeal certain provisions.

The Austrian PNR Act goes further than the Directive

According to the PNR Directive, PNR data may only be processed for the purpose of prosecuting terrorist offences and certain serious criminal offences. These serious crimes are listed in an annex to the Austrian PNR Act, translated directly from the PNR Directive. However, some of these crimes have no equivalent offence in Austrian law, leaving the entire provision unclear. Because of this flaw, the complaint asks the Constitutional Court to repeal this provision of the PNR Act. The list of terrorist offences in the PNR Act also goes much further than the Directive.

The PNR Directive only requires EU Member States to record flights to or from third countries, leaving the recording of intra-EU flights optional for Member States. Many countries have also extended this to domestic flights. In Austria, the Minister of the Interior can do this by decree without giving any specific reason. The complaint suggests that the Constitutional Court should delete this provision, because it has a strong impact on the fundamental rights of millions of people — without any justification of its necessity or proportionality.

Finally, the PNR Act also provides for the possibility for customs authorities and even the military to access PNR data. This is neither provided for in the PNR Directive nor necessary for the prosecution of alleged terrorists and those suspected of serious crimes; it is therefore an excessive measure. Here, too, the complaint suggests that the Constitutional Court should delete the provisions that give these authorities access to PNR data.

epicenter.works
https://en.epicenter.works/

Our PNR complaint to the Federal Administrative Court
https://en.epicenter.works/content/our-pnr-complaint-to-the-federal-administrative-court

PNR: EU Court rules that draft EU/Canada air passenger data deal is unacceptable (26.07.2017)
https://edri.org/pnr-eu-court-rules-draft-eu-canada-air-passenger-data-deal-is-unacceptable/

Why EU passenger surveillance fails its purpose (25.09.2019)
https://edri.org/why-eu-passenger-surveillance-fails-its-purpose/

Passenger surveillance brought before courts in Germany and Austria (22.05.2019)
https://edri.org/passenger-surveillance-brought-before-courts-in-germany-and-austria/

(Contribution by EDRi member epicenter.works, Austria)

23 Oct 2019

The sixth attempt to introduce mandatory SIM registration in Romania

By ApTI

A tragic failure by the police to save an abducted teenage girl – who managed to call the 112 emergency number three times before she was murdered – led to the adoption of a new Emergency Ordinance in Romania. The Ordinance introduces several measures to improve the 112 system, one of which is mandatory SIM card registration for all prepaid users. Approximately ten million prepaid SIM cards are currently in use in Romania.

This is the sixth legislative attempt in the last eight years to pass legislation requiring the registration of SIM card users, despite a 2014 Constitutional Court decision deeming such a measure unconstitutional. The measure was adopted through a fast-track legislative procedure and is supposed to enter into force on 1 January 2020.

The main reason for introducing mandatory SIM card registration appears to be that authorities want to locate calls to the emergency number and punish false emergency calls. However, the measure is unlikely to be effective for that purpose, as anyone who buys a SIM card can obviously pass it on to someone else. Another reason is to identify callers in real emergency situations, in order to locate them more easily and send help.

Romania is one of the few countries in the European Union where calling the emergency number without a SIM card is not possible. This has been a deliberate decision taken by Romanian authorities to limit the number of “non-urgent” calls.

What happened?

After the Emergency Ordinance was proposed, EDRi member ApTI, together with two other Romanian NGOs, launched a petition to the Ombudsman and the government calling for the law not to be adopted. After civil society’s calls for a public debate, the Ministry of Communications organised an oral hearing in which participants were given no more than five minutes to express their views, with no possibility of an actual dialogue. The Emergency Ordinance was adopted shortly after the hearing, despite the fact that the Romanian Constitution explicitly states that laws affecting fundamental rights cannot be adopted by emergency ordinance (Article 115 of the Romanian Constitution).

What did the court say in 2014?

In 2014, the Constitutional Court held that the “retention and storage of data is an obvious limitation of the right to personal data protection and to the fundamental rights protected by the Constitution on personal and family privacy, secrecy of correspondence and freedom of speech” (para. 43 of Decision nr. 461/2014, unofficial translation). The Court explained that restricting fundamental rights is possible only if the measure is necessary in a democratic society. The measure must also be proportionate, and must be applicable without discrimination and without affecting the essence of the right or liberty.

Collecting and storing the personal data of all citizens who buy prepaid SIM cards for the mere reason of punishing those who might abusively call the emergency number seems a blatantly disproportionate measure that unjustifiably limits the right to private life. At the same time, such a measure inverts the presumption of innocence and automatically assumes that all prepaid SIM card users are potentially guilty.

What’s the current status?

The Ombudsman listened to civil society’s concerns, and challenged the Ordinance at the Constitutional Court. Together with human rights NGO APADOR-CH, ApTI is preparing an amicus curiae to support the unconstitutionality claims.

In the meantime, the Ordinance moved on to parliamentary approval and the provisions related to mandatory SIM card registration were rejected in the Senate, the first chamber to debate the law. The Chamber of Deputies can still introduce modifications.

Asociatia pentru Tehnologie si Internet (ApTI)
https://www.apti.ro/

Petition against Emergency Ordinance on mandatory sim card registration (only in Romanian, 12.08.2019)
https://www.apti.ro/petitie-cartele-prepay-initiativa2019/

ApTI’s response to the public consultation on Emergency Ordinance on mandatory SIM card registration (only in Romanian, 21.08.2019)
https://www.apti.ro/raspuns-apti-inregistrare-prepay-112

Constitutional Court decision nr. 461/2014 (only in Romanian)
https://privacy.apti.ro/decizie-curtea-constitutionala-prepay-461-2014/

Timeline of legislative initiatives to introduce mandatory SIM card registration (only in Romanian)
https://apti.ro/Ini%C5%A3iativ%C4%83-legislativ%C4%83-privind-%C3%AEnregistrarea-utilizatorilor-serviciilor-de-comunica%C5%A3ii-electronice-tip-Prepay

(Contribution by Valentina Pavel, EDRi member ApTI, Romania)

23 Oct 2019

EU Commissioners candidates spoke: State of play for digital rights

By Ella Jakubowska

On 1 November 2019, the new College of European Commissioners – comprising 27 representatives (one from each EU Member State) rather than the usual 28, due to Brexit – is scheduled to take office for the next five years, led by President-elect Ursula von der Leyen.

A leading role in Europe’s digital future

EU Commissioners are a powerful bunch: as the executive branch of the European Union – complementing the European Parliament and the Council of the European Union as legislators, and the Court of Justice of the European Union (CJEU) as judiciary – the College has wide-ranging responsibilities covering EU policy, law, the budget, and “political and strategic direction”. With digitalisation an issue that transcends borders, the choice of Commissioners could have an impact on digital rights across the world.

Between 30 September and 8 October 2019, the Commissioners-designate underwent marathon confirmation hearings in the European Parliament. These hearings give the EU’s elected representatives (Members of the European Parliament, MEPs) an opportunity, before voting on the Commissioners-designate, to question them about their capacities and potential priorities if appointed. Among the three who did not make the cut was France’s nominee for the Internal Market, Sylvie Goulard, whose late-stage rejection may delay the start of the new Commission.

A shared task to update Europe for the digital age

Five of the incoming Commissioners’ portfolios are predicted to have a significant influence on digital policy. Carved up by President-elect von der Leyen, their overlapping responsibilities to make Europe fit for the digital age could make or break citizens’ rights to privacy, data protection and online freedoms in general:

  • Sweden’s Ylva Johansson, Commissioner-designate for Home affairs, will inherit a portfolio including cybercrime, the terrorist content Regulation and issues relating to privacy and surveillance. Whilst her hearing was relatively light on digital questions, it was certainly heavy on evasive answers. Her insistence on fundamental rights was a good start, but her call for compromise between security and privacy fell into the age-old myth that the two rights are mutually exclusive.
  • Belgium’s Didier Reynders, Commissioner-designate for Justice and Consumers, championed rights by committing to enforce the General Data Protection Regulation (GDPR) to its fullest extent. On Artificial Intelligence (AI) and data protection, he promised swift law, safety, trust, transparency, and for those making or judging the law to better understand the impacts of algorithmic decisions. He cited plans for a collective redress position in November.
  • No-longer-Commissioner-designate Sylvie Goulard, of France, lost her chance to oversee the Internal Market. Although Goulard pitched increased digital education and maintaining the EU’s data policy leadership, MEPs were far more concerned with her past. Accusations of impropriety in her former role as France’s defence minister, and high earnings as a private consultant while in office, led MEPs to conclude that she lacked the integrity to be a Commissioner. Update: Thierry Breton has been appointed as the new Commissioner-designate for the Internal Market. (7 November 2019)
  • The Czech Republic’s Věra Jourová (current Commissioner for Justice) made her case as Commissioner-designate for Values and transparency. Democracy, freedom of expression and cracking down on disinformation were key topics. Despite an understated performance, she called Europeans “the safest people on the planet.” She is right that GDPR sets a strong global standard, but it has faced a rocky implementation, and as of today requires further efforts to ensure the harmonisation that the Regulation prescribed.
  • Last was Denmark’s Margrethe Vestager for Executive Vice-President for a Europe fit for the digital age, and continuing as Competition Commissioner. Her anti-Big-Tech, “privacy-friendly”, pro-equality, redistribution agenda was well received. She faced questions about breaking up Big Tech, leaving it on the table as a “tool” of last resort but emphasising her desire to exhaust other avenues first. But she stumbled when it came to accusations that her aspirations to rein in Big Tech are incompatible with her remit as leader of the EU’s digital affairs.

The implications for digital policy

Throughout the hearings, the Commissioners-designate made many commitments, emphasised their policy priorities, and shared their plans for the future. Although we do not know exactly how this will translate into concrete policy, the hearings give valuable insight into how the new College intends to tackle rights challenges in the online environment. This is not an exact science, but we invite you to join us – and our “rightsometer” – in speculating about what impact the nominees’ ideas will have on citizens’ digital rights over the next five years, based on what they did (and did not) say.

Privacy
Key legislation: ePrivacy

The currently stalled ePrivacy Regulation was unsurprisingly raised by MEPs – and, reassuringly, Vestager shared that “passing ePrivacy” needs to be “a high priority”.

Result: with Vestager’s support, it is a cautiously optimistic 3/5 on the rightsometer – but the troubled history of the Regulation also warns us not to be too hopeful.

Platform power
Key legislation: E-Commerce Directive (ECD), slated to be replaced by the upcoming Digital Services Act (DSA)

Vestager was the champion of regulating Big Tech throughout her hearing, proposing to redress the balance of power in favour of citizens, and giving consumers more choice about platforms. But she later confessed to uncertainty around the shape that the DSA will take, saying that she needs to “take stock” before committing to a position on E-Commerce. Jourová committed to redress in the event of wrongful takedown of content, and emphasised her strong support for the DSA. However, she suggested her intention to explore platform “responsibility” for illegal content, a move which would threaten myriad human rights.

Result: the rightsometer gives an inconclusive 2.5/5, with commitments to strengthening Big Tech regulation promising, but risks of unintended consequences of some of their ideas remaining a big concern.

Disinformation
Key document: Code of Practice on Disinformation

Jourová committed to tackling the problem of online disinformation, promising to bring in codes of conduct for platforms, to make it clear where political advertisements come from and who funds them, and to enforce “rules” for political campaigning.

Result: it’s a positive 4/5, and we encourage Jourová to analyse the risks posed by targeted political advertising and by the online tracking industry that dysfunctional business models have created. A cautious approach is nevertheless needed (see the Access Now, EDRi and Liberties guide on disinformation).

Law enforcement and cross-border access to data
Key legislation: “e-Evidence” proposal

Under direct questioning from MEP Moritz Körner about plans to advance e-Evidence, Commissioner-designate Johansson declined to reply. She also insinuated that encryption, although vital for fundamental rights, might be incompatible with fighting terrorism.

Result: e-Evidence makes for a pessimistic 0/5 on the rightsometer, with nothing to give confidence that this controversial proposal is being reassessed.

Artificial Intelligence (AI)
Key legislation: none proposed yet – but both von der Leyen and Reynders promised “horizontal” legislation within the Commission’s first 100 days

Jourová emphasised that fundamental rights in AI innovation will “ensure that our solutions put people first, and will be more sustainable as a result”. Vestager added that ethics will be at the heart of AI policy, and Reynders that Europe’s “added value” is in bringing protection for privacy and data to future AI legislation.

Result: a promising 4/5 on the rightsometer; we welcome the Commissioners-designate’s focus on fundamental rights in the implementation of AI-based technologies.

Where does that leave our digital rights?

The Commissioners-designate made substantive commitments on disinformation, Artificial Intelligence, privacy, and mitigating platform power. Protecting fundamental rights online was, thankfully, a persistent concern for all the nominees. Certain topics, such as “digital literacy”, were mentioned but given no flesh, and nominees also declined to answer a number of “too specific” questions. Although there is much to be optimistic about, the balance still to be struck between rights and law enforcement or innovation means that we should stay cautious.

Access Now: Meet the European Commissioners: Who will shape the next five years of digital policy in the EU? (27.09.2019)
https://www.accessnow.org/meet-eu-commissioners/

EDRi: Open letter to EU Member States: Deliver ePrivacy now! (10.10.2019)
https://edri.org/open-letter-to-eu-member-states-deliver-eprivacy-now/

Access Now, Civil Liberties Union for Europe and European Digital Rights: Joint Report on Informing the “Disinformation” Debate (18.10.2018)
https://edri.org/files/online_disinformation.pdf

(Contribution by Ella Jakubowska, EDRi intern)

18 Oct 2019

EU copyright dialogues: The next battleground to prevent upload filters

By Ella Jakubowska

On 15 October, the European Commission held the first of the stakeholder dialogues mandated by Article 17 of the EU copyright Directive, inviting 65 organisations to help map current practices and opening the door for deeper collaboration in the future.

Organisations from all sides of the debate were able to present their positions. While the first meeting focused on music, software and gaming, the next one will focus on audiovisual, visual, sports and text. These live-streamed dialogues are probably the last window of opportunity at the EU level for those who campaigned against upload filters in the copyright Directive to achieve the alleged goals of the Directive – harmonisation and modernisation of the copyright framework – without the collateral damage to citizens’ liberties. If the dialogues fail to achieve this, the battle will move to EU Member States.

The Copyright Directive was adopted in June 2019 as part of plans to unite Europe’s Digital Single Market – just over a year after the General Data Protection Regulation (GDPR) became applicable, and in the midst of an ongoing struggle over the proposed ePrivacy Regulation. The contentious Directive was welcomed by rightsholders keen to see online platforms take responsibility for copyright infringement, but it drew criticism from civil society, academia, UN Special Rapporteur on Freedom of Expression David Kaye, and even Edward Snowden, for enabling the removal of citizens’ legal content by automatic filters.

“Techno-solutionism” as a knee-jerk reaction

Techno-solutionism describes attempts to solve any and all problems with technology. The technology-focused approach taken in the Directive, and advocated for by some rightsholders, is the wrong solution to the alleged problem (the lack of negotiating power between rightsholders and streaming services). The upload filters deriving from Article 17 are severely error-prone (from a cat’s purring being mistaken for copyrighted music to evidence of war crimes being lost) and do not understand the full range of nuanced human expression, for example caricature, parody or pastiche. This situation empowers tech giants, harms small and medium enterprises, and fails to adequately protect authors. Furthermore, Article 17(7) of the Directive offers only limited mandatory exceptions for the use of content for quotation, parody or pastiche. Member States still have the opportunity to go beyond these and make all exceptions and limitations mandatory. However, the proposed automated filters will not be able to analyse most of them; a more nuanced approach to copyrighted content will be needed, including human supervision.

Violations and harms in the current situation

More than just theoretically flawed, the application of the copyright Directive could lead to violations of freedoms. So-called “copyright trolling” is a phenomenon used either to extort or to censor individual users. When implementing the Directive, Member States should put in place systems that penalise such abuses. Furthermore, the use of automated filters may collide with Article 22 of the GDPR, which gives data subjects the right not to be subject to a decision based solely on automated processing if that decision significantly affects them. How this will be dealt with in practice remains to be seen.

Fundamental incompatibility with the human right to redress

The right to redress is a fundamental principle if this Directive is to avoid collateral damage. The current redress mechanism has already been shown to be inadequate, as platforms are likely to invoke their Terms of Service as an excuse to delete content rather than go through the hassle of deciding whether this or that exception or limitation in the Directive protects a user’s right to use copyrighted content. We hope that the non-judicial redress mechanisms mentioned in Article 17(9) will be easily and freely available to anyone needing them.

Reframing the debate to prevent violations of free expression

If the goal is indeed to target services that unfairly benefit from authors’ work, then the definition of Online Content Sharing Service Providers (OCSSPs) must be made more specific; it has to better reflect the few services that profit from large-scale copyright infringement to the extent that they become alternatives to paid streaming services, and that do not adequately remunerate rightsholders. Another possible solution is to reverse the burden of proof so that disputed content is not immediately removed. In essence, silence cannot play to the disadvantage of citizens: if a platform asks a rightsholder for a licence and the rightsholder does not react, this should mean that the platform has met the “best efforts” threshold to obtain a licence. If a rightsholder asks to block a user’s content and the user claims that they were within their rights, the rightsholder’s silence should imply that the disputed content stays up or is reinstated as soon as possible. In the case of disagreement in the dispute, human intervention would be appropriate.

The next stakeholder meeting will be held on 5 November.

First meeting of the Stakeholder Dialogue on Art 17 of the Directive on Copyright in the Digital Single Market (15.10.2019)
https://ec.europa.eu/digital-single-market/en/news/first-meeting-stakeholder-dialogue-art-17-directive-copyright-digital-single-market

Organisation of a stakeholder dialogue on the application of Article 17 of Directive on Copyright in the Digital Single Market (28.08.2019)
https://ec.europa.eu/digital-single-market/en/news/organisation-stakeholder-dialogue-application-article-17-directive-copyright-digital-single

All you need to know about copyright and EDRi (15.03.2019)
https://edri.org/all-you-need-to-know-about-copyright-and-edri/

Copyfails: time to #fixcopyright! (23.05.2016)
https://edri.org/copyfails/

Article 17 Stakeholder Dialogue: We’ll Continue to Advocate for Safeguarding User Rights (08.10.2019)
https://www.communia-association.org/2019/10/08/article-17-stakeholder-dialogue-well-continue-advocate-safeguarding-user-rights/

(Contribution by Ella Jakubowska, EDRi intern)

17 Oct 2019

Trilogues on terrorist content: Upload or re-upload filters? Eachy peachy.

By Chloé Berthélémy

On 17 October 2019, the European Parliament, the Council of the European Union and the European Commission started closed-door negotiations – so-called trilogues – with a view to reaching an early agreement on the Regulation on preventing the dissemination of terrorist content online.

The European Parliament improved the text proposed by the European Commission by addressing its dangerous pitfalls and reinforcing rights-based and rights-protective measures. The position of the Council of the European Union, however, supported the “proactive measures” the Commission suggested, meaning potential “general monitoring obligations” and, in practice, automated detection tools and upload filters to identify and delete “terrorist content”.

Finding middle ground

In trilogue negotiations, the parties – the European Parliament, Commission and Council – attempt to reach a consensus starting from what can be very divergent texts. In the Commission’s and Council’s versions of the proposed Regulation, national competent authorities have the option to force the use of technical measures upon service providers. The Parliament, on the contrary, deleted all references to forced pro-activity and thus brought the Regulation into line with Article 15 of the E-Commerce Directive, which prohibits obliging platforms to generally monitor the user-generated content they host.

Ahead of the negotiations, the European Commission was exploring the possibility of suggesting “re-upload filters” instead of upload filters as a way towards a compromise. Also known as “stay-down” filters, these differ from regular upload filters in that they only search for, identify and take down content that has already been removed once. The aim is to ensure that content deemed illegal once stays down and does not spread further online.

Upload or re-upload filters: What’s the difference?

“Re-upload filters” entail the use of automated means and the creation of hash databases that contain digital hash “fingerprints” of every piece of content that hosting providers have identified as illegal and removed. They also mean that all user-generated content published on the intermediaries’ services is monitored, compared with the material contained in those databases, and filtered out in case of a match. As the pieces of content included in those databases are in most cases not subject to a court’s judgment, this practice could amount to a general monitoring obligation, which is prohibited under Article 15 of the E-Commerce Directive.
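
To make the mechanism concrete, here is a minimal sketch of such a filter in Python. This is a deliberately simplified illustration under our own assumptions, not any platform’s actual implementation; real deployments typically use perceptual rather than exact cryptographic hashes so that re-encoded copies still match.

    import hashlib

    # Hypothetical database of fingerprints of content removed once before.
    # In practice, these entries are rarely based on a court judgment.
    removed_fingerprints = set()

    def fingerprint(content: bytes) -> str:
        # Exact cryptographic hash, used here for simplicity; deployed
        # systems favour perceptual hashes that survive re-encoding.
        return hashlib.sha256(content).hexdigest()

    def take_down(content: bytes) -> None:
        # Remember removed content so future re-uploads can be blocked.
        removed_fingerprints.add(fingerprint(content))

    def allow_upload(content: bytes) -> bool:
        # Every single upload is compared against the database; this
        # blanket comparison is what makes the filter general monitoring.
        return fingerprint(content) not in removed_fingerprints

    take_down(b"video deemed illegal")
    assert not allow_upload(b"video deemed illegal")      # identical copy: blocked
    assert allow_upload(b"video deemed illegal, edited")  # any edit slips through

Note that a single changed byte yields a completely different exact hash, which is why trivially edited copies evade such filters; the Christchurch example below shows the scale of that problem.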

Filters are not equipped to make complex judgments on the legality of content posted online. They do not understand the context in which content is published and shared, and as a result they often make mistakes. Such algorithmic tools do not take proper account of legal uses of content, for example for educational, artistic, journalistic or research purposes, for expressing polemic, controversial or dissident views in public debates, or in the framework of awareness-raising activities. They risk accidentally suppressing legal speech, with exacerbated impacts on already marginalised internet users.

Human rights defenders as collateral damage

The way the hash databases are formed will likely reflect discriminatory societal biases. Certain types of content and speech are reported more often than others, and platforms’ decisions to characterise them as illegal and add them to the databases often mirror societal norms. As a result, content related to Islamic terrorist propaganda is more likely to be targeted than white supremacist content – even in cases where the former actually documents human rights violations or serves to raise awareness against terrorist recruitment. Hash databases of allegedly illegal content are not accountable, transparent, or democratically audited and controlled, and will likely disadvantage certain users based on their ethnic background, gender, religion, language or location.

In addition, re-upload filters are easy to circumvent on mainstream platforms: Facebook declared that it holds over 800 distinct edits of the Christchurch shooting video in its hash database, because users constantly modified the original material to trick automatic identification. Lastly, hash databases and the related algorithms are developed by dominant platforms, which have the resources to invest in such sophisticated tools. Obliging all other actors on the market to adopt such databases risks reinforcing those platforms’ dominant position.

A more human-rights-compatible approach would follow the Parliament’s proposal, in which a platform is required to implement measures – excluding monitoring and automated tools – only after it has received a substantial number of removal orders, and only measures that do not hamper its users’ freedom of expression and right to receive and impart information. The European Parliament’s negotiating team should defend the improvements achieved after arduous negotiations between the Parliament’s different political groups and committees. Serious problems, such as terrorism, require serious legislation, not technological solutionism.

Terrorist content online Regulation: Document pool
https://edri.org/terrorist-content-regulation-document-pool/

Open letter on the Terrorism Database (05.02.2019)
https://edri.org/open-letter-on-the-terrorism-database/

Terrorist Content Regulation: Successful “damage control” by LIBE Committee (08.04.2019)
https://edri.org/terrorist-content-libe-vote/

Vice, Why Won’t Twitter Treat White Supremacy Like ISIS? Because It Would Mean Banning Some Republican Politicians Too (25.04.2019)
https://www.vice.com/en_us/article/a3xgq5/why-wont-twitter-treat-white-supremacy-like-isis-because-it-would-mean-banning-some-republican-politicians-too

(Contribution by Chloé Berthélémy, EDRi)

10 Oct 2019

Open letter to EU Member States: Deliver ePrivacy now!

By EDRi

On 11 October 2019, EDRi, together with four other civil society organisations, sent an open letter to EU Member States urging them to conclude the negotiations on the ePrivacy Regulation. The letter highlights the urgent need for a strong ePrivacy Regulation to tackle the problems created by commercial surveillance business models, and expresses deep concern that the Member States, represented in the Council of the European Union, still have not made decisive progress more than two and a half years after the Commission presented the proposal.

You can read the letter here (pdf) and below:

Open letter to EU Member States
11.10.2019

Dear Minister,

We, the undersigned organisations, urge you to swiftly reach an agreement in the Council of the European Union on the draft ePrivacy Regulation.

We are deeply concerned by the fact that, more than two and a half years since the Commission presented the proposal, the Council still has not made decisive progress. Meanwhile, one after another, privacy scandals are hitting the front pages, from issues around the exploitation of data in the political context, such as “Cambridge Analytica”, to the sharing of sensitive health data. In 2019, for example, an EDRi/CookieBot report demonstrated how EU governments unknowingly allow the ad tech industry to monitor citizens across public sector websites.1 An investigation by Privacy International revealed how popular websites about depression in France, Germany and the UK share user data with advertisers, data brokers and large tech companies, while some depression test websites leak answers and test results to third parties.2

A strong ePrivacy Regulation is necessary to tackle the problems created by the commercial surveillance business models. Those business models, which are built on tracking and cashing in on people’s most intimate moments, have taken over the internet and create incentives to promote disinformation, manipulation and illegal content.

What Europe gains with a strong ePrivacy Regulation

The reform of the current ePrivacy Directive is essential to strengthen – not weaken – individuals’ fundamental rights to privacy and confidentiality of communications.3 It is necessary to make current rules fit for the digital age.4 In addition, a strong and clear ePrivacy Regulation would push Europe’s global leadership in the creation of a healthy digital environment, providing strong protections for citizens, their fundamental rights and our societal values. All this is key for the EU to regain its digital sovereignty, one of the goals set out by Commission President-elect Ursula von der Leyen in her political guidelines.5

Far from being an obstacle to the development of new technologies and services, the ePrivacy Regulation is necessary to ensure a level playing field and legal certainty for market operators.6 It is an opportunity for businesses7 to innovate and invest in new, privacy-friendly, business models.

What Europe loses without a strong ePrivacy Regulation

Without the ePrivacy Regulation, Europe will continue living with an outdated Directive which is not being properly enforced,8 and the completion of our legal framework initiated with the General Data Protection Regulation (GDPR) will not be achieved. Without a strong Regulation, surveillance-driven business models will be able to cement their dominant positions9 and continue posing serious risks to our democratic processes.10 11 The EU also risks losing the position as global standard-setter and digital champion that it earned through the adoption of the GDPR.

As a result, people’s trust in internet services will continue to fall. According to the Special Eurobarometer Survey of June 2019, the majority of users believe that they have only partial control over the information they provide online, and 62% of them are concerned about it.

The ePrivacy Regulation is urgently needed

We expect the EU to protect people’s fundamental rights and interests against practices that undermine the security and confidentiality of their online communications and intrude in their private lives.

As you meet today to discuss the next steps of the reform, we urge you to finally reach an agreement to conclude the negotiations and deliver an upgraded and improved ePrivacy Regulation for individuals and businesses. We stand ready to support your work.

Yours sincerely,

AccessNow
The European Consumer Organisation (BEUC)
European Digital Rights (EDRi)
Privacy International
Open Society European Policy Institute (OSEPI)

1 https://www.cookiebot.com/media/1121/cookiebot-report-2019-medium-size.pdf
2 https://privacyinternational.org/long-read/3194/privacy-international-study-shows-your-mental-health-sale
3 https://edpb.europa.eu/our-work-tools/our-documents/outros/statement-32019-eprivacy-regulation_en
4 https://www.beuc.eu/publications/beuc-x-2017-090_eprivacy-factsheet.pdf
5 https://ec.europa.eu/commission/sites/beta-political/files/political-guidelines-next-commission_en.pdf
6 https://edpb.europa.eu/our-work-tools/our-documents/outros/statement-32019-eprivacy-regulation_en
7 https://www.beuc.eu/publications/beuc-x-2018-108-eprivacy-reform-joint-letter-consumer-organisations-ngos-internet_companies.pdf
8 https://edri.org/cjeu-cookies-consent-or-be-tracked-not-an-option/
9 http://fortune.com/2017/04/26/google-facebook-digital-ads/
10 https://www.theguardian.com/technology/2016/dec/04/google-democracy-truth-internet-search-facebook
11 https://www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy

Read more:

Open letter to EU Member States on ePrivacy (11.10.2019)
https://edri.org/files/eprivacy/ePrivacy_NGO_letter_20191011.pdf

Right a wrong: ePrivacy now! (09.10.2019)
https://edri.org/right-a-wrong-eprivacy-now/

Civil society calls Council to adopt ePrivacy now (05.12.2018)
https://edri.org/civil-society-calls-council-to-adopt-eprivacy-now/

ePrivacy reform: Open letter to EU member states (27.03.2018)
https://edri.org/eprivacy-reform-open-letter-to-eu-member-states/

09 Oct 2019

Right a wrong: ePrivacy now!

By Ella Jakubowska

When the European Commission proposed to replace the outdated and improperly enforced 2002 ePrivacy Directive with a new ePrivacy Regulation in January 2017, it marked a cautiously hopeful moment for digital rights advocates across Europe. With the General Data Protection Regulation (GDPR), applicable since May 2018, Europe took a giant leap forward in the protection of personal data. Yet by failing to adopt the only piece of legislation protecting the right to privacy and to the confidentiality of communications, the Council of the European Union seems to have prioritised private interests over the fundamental rights, security and freedoms of citizens that a strong ePrivacy Regulation would protect.

This is not an abstract problem; commercial surveillance models – where businesses exploit user data as a key part of their business activity – pose a serious threat to our freedom to express ourselves without fear. This model relies on profiling, essentially putting people into the boxes in which the platforms believe they belong – a very slippery slope towards discrimination. And as children make up an increasingly large proportion of internet users, the risks become even starker: their online actions could affect their access to opportunities in the future. Furthermore, these models are set up to profit from the mass sharing of content, so platforms are perversely incentivised to promote sensationalist posts that could harm democracy (for example, political disinformation).

The rise of highly personalised adverts (“microtargeting”) means that online platforms increasingly control and limit the parameters of the world that you see online, based on their biased and potentially discriminatory assumptions about who you are. And as for that online quiz about depression that you took? Well, that might not be as private as you thought.

It is high time that the Council of the European Union takes note of the risks to citizens caused by the current black hole where ePrivacy legislation should be. Amongst the doom and gloom, there are reasons to be optimistic. If delivered in its strongest form, an improved ePrivacy Regulation will complement the GDPR; ensure compliance with essential principles such as privacy by design and by default; tackle the pervasive model of online tracking and the disinformation it creates; and give power back to citizens over their private lives and interests. We urge the Council to swiftly update and adopt a strong, citizen-centred ePrivacy Regulation.

e-Privacy revision: Document pool
https://edri.org/eprivacy-directive-document-pool/

ePrivacy: Private data retention through the back door (22.05.2019)
https://edri.org/eprivacy-private-data-retention-through-the-back-door/

Captured states – e-Privacy Regulation victim of a “lobby onslaught” (23.05.2019)
https://edri.org/coe-eprivacy-regulation-victim-of-lobby-onslaught/

NGOs urge Austrian Council Presidency to finalise e-Privacy reform (07.11.2018)
https://edri.org/ngos-open-letter-austrian-council-presidency-eprivacy/

e-Privacy: What happened and what happens next (29.11.2017)
https://edri.org/e-privacy-what-happened-and-what-happens-next/

(Contribution by Ella Jakubowska, EDRi intern)

09 Oct 2019

Why weak encryption is everybody’s problem

By Ella Jakubowska

Representatives of the UK Home Office, the US Attorney General, the US Department of Homeland Security and the Australian Department of Home Affairs have joined forces to issue an open letter to Mark Zuckerberg. In their letter of 4 October, they urge Facebook to halt plans for end-to-end (that is, strong) encryption across Facebook’s messaging platforms, unless such plans include “a means for lawful access to the content of communications”. In other words, the signatories are requesting what security experts call a “backdoor”: a way for law enforcement to circumvent legitimate encryption methods in order to access private communications.

The myth of weak encryption as safe

Whilst the US, UK and Australia are adamant that their position enhances the safety of citizens, there are many reasons to be sceptical of this. The open letter uses emotive language to emphasise the risk of “child sexual exploitation, terrorism and extortion” that the signatories claim is associated with strong encryption, but it fails to give a balanced assessment that includes the risks weak encryption poses to privacy, democracy and most business transactions. By positioning weak encryption as a “safety” measure, the US, UK and Australia imply (or even explicitly state) that supporters of strong encryption are supporting crime.

Government-led attacks on everybody’s digital safety aren’t new. Since the 1990s, the US has tried to prevent the export of strong encryption and—when that failed—worked on forcing software companies to build backdoors for the government. Those attempts were called the first “Cryptowars”.

In reality, however, arguing that encryption mostly helps criminals is like saying that vehicles should be banned and all knives blunt because both have been used by criminals and terrorists. Such reasoning ignores that in the huge majority of cases strong encryption greatly enhances people’s safety. From enabling secure online banking, to keeping citizens’ messages private, internet users and companies rely on strong encryption every single day. It is the foundation of trusted, secure digital infrastructure. Weak encryption, on the other hand, is like locking the front door of your home, only to leave the back one open. Police may be able to enter more easily – but so too can criminals.
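
To illustrate what strong encryption guarantees, here is a minimal sketch of end-to-end encryption using the PyNaCl library. The keys and message are purely illustrative; this is a sketch of the general technique, not Facebook’s actual design.

    # pip install pynacl
    from nacl.public import PrivateKey, Box

    # Each user generates a key pair; private keys never leave their device.
    alice = PrivateKey.generate()
    bob = PrivateKey.generate()

    # Alice encrypts for Bob with her private key and Bob's public key.
    ciphertext = Box(alice, bob.public_key).encrypt(b"private message")

    # Only Bob can decrypt, using his private key and Alice's public key.
    plaintext = Box(bob, alice.public_key).decrypt(ciphertext)
    assert plaintext == b"private message"

A “lawful access” mechanism would mean encrypting every message to an additional, government-held key as well: a single point of failure whose theft or abuse would expose all users at once, not just suspects.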

Strong encryption is vital for protecting civil rights

The position outlined by the US, UK and Australia is fundamentally misleading. Undermining encryption harms innocent citizens. Encryption already protects some of the most vulnerable people worldwide – journalists, environmental activists, human rights defenders, and many more. State interception of private communications is frequently not benign: government hacking can and does lead to egregious violations of fundamental rights.

For many digital rights groups, this debate is the ultimate groundhog day, and valuable effort is expended year after year on challenging the false dichotomy of “privacy versus security”. Even the European Commission has struggled to sort fact from fear-mongering.

However, it is worth remembering that Facebook’s announcement that it will encrypt some user content is so far just that: an announcement. The advertising company’s approach to privacy is a prime example of surveillance capitalism: protecting some users when it is favourable for its PR, and exploiting user data when there is a financial incentive to do so. To best protect citizens’ rights, we need a concerted effort between policy-makers and civil society to enact laws and build better technology so that neither our governments nor social media platforms can exploit us and our personal data.

The bottom line

Facebook must refuse to build anything that could constitute a backdoor into their messaging platforms. Otherwise, Facebook is handing the US, UK and Australian governments a surveillance-shaped skeleton key that puts Facebook users at risk worldwide. And once that door is unlocked, there will be no way to control who will enter.

EDRi Position paper on encryption: High-grade encryption is essential for our economy and our democratic freedoms (25.01.2016)
https://www.edri.org/files/20160125-edri-crypto-position-paper.pdf

Encryption – debunking the myths (03.05.2017)
https://www.edri.org/files/20160125-edri-crypto-position-paper.pdf

Encryption Workarounds: a digital rights perspective (12.09.2017)
https://edri.org/files/encryption/workarounds_edriposition_20170912.pdf

(Contribution by Ella Jakubowska, EDRi intern)

09 Oct 2019

Content regulation – what’s the (online) harm?

By Access Now and EDRi

In recent years, national legislators in EU Member States have been pushing for new laws to combat negative societal phenomena such as hateful or terrorist content online. These regulatory efforts share one common denominator: they shift the focus from conditional intermediary liability to holding intermediaries directly responsible for the dissemination of illegal content on their platforms.

Two prominent legislative and policy proposals of this kind that will significantly shape the European debate around the future of intermediary liability are the UK White Paper on Online Harms and the newly adopted Avia law in France.

UK experiment to fight online harm: overblocking on the horizon

In April 2019, the United Kingdom (UK) government proposed a new regulatory model including a so-called statutory duty of care, saying it wants to make platform companies more responsible for the safety of online users. The White Paper foresees a future regulation that holds companies accountable for a set of vaguely predefined “online harms”, which include illegal content but also user behaviour that is deemed harmful though not necessarily illegal.

EDRi and Access Now have long emphasised the risk that privatised law enforcement and heavy reliance on automated content filters pose to human rights online. In this vein, multiple civil society organisations, including EDRi members (for example Article 19 and Index on Censorship), have warned against the alarming measures the British approach contains. The envisaged duty of care, combined with heavy fines, creates incentives for platform companies to block online content even when its illegality is doubtful, in order to avoid liability. The regulatory approach proposed by the UK Online Harms White Paper will in effect coerce companies into adopting content-filtering measures that ultimately result in the general monitoring of all information shared on online platforms. Such over-compliance with state demands often amounts to illegitimate restrictions on freedom of expression – in other words, online censorship. Moreover, a general monitoring obligation is currently prohibited by European law.

The White Paper also covers activities and content that are not illegal but potentially undesirable, such as advocacy of self-harm or disinformation. This is highly problematic with regard to the human rights law criteria that govern restrictions on freedom of expression. The ill-defined and vague concept of “online harms” cannot serve as a proper legal basis to justify an interference with fundamental rights. Ultimately, the proposal falls short of providing substantial evidence to sustain its approach. It also bluntly fails to address key issues of online regulation, such as content distribution on platforms – which lies at the core of companies’ business models – opacity of algorithms, violations of online privacy, and data breaches.

French Avia law: Another “quick fix” to online hate speech?

Inspired by the German Network Enforcement Act (NetzDG), France has now adopted its own piece of legislation, the so-called Avia law – named after the Rapporteur of the file, Member of Parliament Laetitia Avia. Similarly to the NetzDG, the law requires companies to remove manifestly illegal content within 24 hours of receiving a notification about it.

Following its German predecessor, the Avia law encourages companies to be overly cautious and pre-emptively remove or block content to avoid substantial fines for non-compliance. The time frame in which they are expected to take action is too short to allow a proper assessment of each case. Importantly, the French Parliament does not rule out the possibility that companies will resort to automated decision-making tools to process the notices. Such a measure can in itself be grounded in the legitimate objective of fighting hatred, racism, LGBTQI+-phobia and other discriminatory content. However, tackling hate speech and other context-dependent content requires careful and balanced analysis. In practice, leaving it to private actors – without adequate oversight and redress mechanisms – to decide whether a piece of content meets the threshold of “manifest illegality” will be damaging for freedom of expression and the rule of law.

However, there are also positive aspects of the Avia law. It provides safeguards for procedural fairness by requiring individuals who notify potentially illegal content to state the reasons why they believe it should be removed. Moreover, the law obliges companies to establish internal complaint and appeal mechanisms for both the notifier and the content provider. Transparency obligations on content moderation policies are also introduced. Lastly, when monitoring compliance with the law, the regulator established by the Avia law does not focus its evaluation solely on the amount of content removed, but also scrutinises over-removal.

Do not fall into the same trap!

We are currently witnessing regulatory efforts at the national and European level that seek to provide easy solutions to online phenomena such as terrorist content or hate speech while ignoring the underlying societal issues. Most of the suggested solutions rely on filters and content recognition technologies with limited ability to assess the context in which a given piece of content has been posted. Proper safeguards and requirements for meaningful transparency that should accompany these measures are often sidetracked by legislators. These trends are not limited to the EU and its Member States. For instance, the Australian government recently adopted a new bill imposing criminal liability on executives of social media platforms, and Section 230 of the American Communications Decency Act (CDA) may be placed under a review process, triggered by a presidential executive order, that would significantly limit the liability protections the existing law grants to platform companies.

Legislators around the globe have one thing in common: the urge to “eradicate” vaguely defined “online harms”. The rhetoric of danger surrounding online harm has become a driving force behind regulatory responses in liberal democracies – exactly the kind of logic frequently used by authoritarian regimes to restrict legitimate debate. With the upcoming Digital Services Act (DSA) potentially replacing the E-Commerce Directive in Europe, the EU has an extraordinary opportunity to become a trend-setter, establishing high standards for the protection of users’ human rights while addressing legitimate concerns stemming from the spread of illegal online content.

For this to happen, the European Commission should propose a law that imposes workable, transparent and accountable content moderation procedures and a functioning notice-and-action system on platforms. Such positive examples of tackling platform regulation should be combined with forceful action against the centralisation of power over data and information in the hands of a few big tech companies. EDRi and Access Now have developed specific recommendations containing human rights safeguards that should be incorporated both in content moderation exercised by companies and in state regulation tackling illegal online content. The European Commission’s responsibility is to ensure fundamental rights during the process of drafting any future legislation governing intermediary liability and redefining content governance online.

Access Now
https://www.accessnow.org/

Access Now’s human rights guide on protecting freedom of expression in the era of online content moderation (13.05.2019)
https://www.accessnow.org/cms/assets/uploads/2019/05/AccessNow-Preliminary-Recommendations-On-Content-Moderation-and-Facebooks-Planned-Oversight-Board.pdf

E-Commerce review: Opening Pandora’s box? (20.06.2019)
https://edri.org/e-commerce-review-1-pandoras-box/

French law aimed at combating hate content on the internet (09.07.2019)
http://www.assemblee-nationale.fr/15/pdf/ta/ta0310.pdf

UK: Online Harms Strategy must “design in” fundamental rights (10.04.2019)
https://edri.org/uk-online-harms-strategy-must-design-in-fundamental-rights/

UK’s Online Harms White Paper (04.2019)
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf

(Contribution by Eliška Pírková, EDRi member Access Now, and Chloé Berthélémy, EDRi)

03 Oct 2019

CJEU ruling on fighting defamation online could open the door for upload filters

By EDRi

Today, on 3 October 2019, the Court of Justice of the European Union (CJEU) gave its ruling in case C‑18/18 Glawischnig-Piesczek v Facebook. The case relates to injunctions obliging a service provider to stop the dissemination of a defamatory comment. Some aspects of the decision could pose a threat to freedom of expression, in particular that of political dissidents who may be accused of defamatory practices.

This ruling could open the door for exploitative upload filters for all online content.

said Diego Naranjo, Head of Policy at EDRi.

Despite the positive intention to protect an individual from defamatory content, this decision could lead to severe restrictions on freedom of expression for all internet users, with particular risks for political critics and human rights defenders, by paving the road for automated content recognition technologies.

The ruling confirms that a hosting provider such as Facebook can be ordered, in the context of an injunction, to seek and identify, among all the content shared by its users, content that is identical to the content a court has characterised as illegal. If the obligation to block future content applies to all users of a large platform like Facebook, the Court has in effect accepted, as in line with the E-Commerce Directive, that courts may demand automated upload filters – and has blurred the distinction its previous case law drew between general and specific monitoring. EDRi is concerned that automated upload filters for identical content will not be able to distinguish between legal and illegal content, in particular when applied to individual words, which can have very different meanings depending on the context and the intent of the user.
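
A deliberately naive Python sketch shows why matching “identical” text ignores context; the banned phrase is a placeholder, not the actual wording at issue in the case:

    # Naive "identical content" filter over user posts.
    BANNED_PHRASES = ["<phrase a court found defamatory>"]

    def is_blocked(post: str) -> bool:
        # Flags any post containing the phrase, with no notion of
        # who is speaking, or why.
        return any(phrase in post for phrase in BANNED_PHRASES)

    # A repeat of the defamatory insult is blocked...
    assert is_blocked("X is <phrase a court found defamatory>")
    # ...but so is a news report quoting the court's own ruling.
    assert is_blocked(
        "The court held that calling X '<phrase a court found defamatory>' is illegal."
    )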

EDRi welcomes the Court’s attempt to find a balance of rights (namely freedom of expression and the freedom to conduct a business) and to limit the impact on freedom of expression by differentiating between searches for identical and for equivalent content. However, the ruling seems to depart from previous case law on the ban on general monitoring obligations (for example Scarlet v Sabam). Imposing filtering of all communications in order to look for one specific piece of content, using non-transparent algorithms, is likely to unduly restrict legal speech – regardless of whether the filters look for content that is identical or equivalent to illegal content.

The upcoming review of the E-Commerce Directive should clarify, among other things, how to deal with online content moderation. In the context of this review, it is crucial to address the problem of disinformation without unduly interfering with platform users’ fundamental right to freedom of expression. Specifically, the business model based on amplifying certain types of content to the detriment of others in order to attract users’ attention requires urgent scrutiny.

Read more:

No summer break for free expression in Europe: Facebook cases that matter for human rights (23.09.2019)
https://www.accessnow.org/no-summer-break-for-free-expression-in-europe-facebook-cases-that-matter-for-human-rights/

CJEU case C-18/18 – Glawischnig-Piesczek Press Release (03.10.2019)
https://curia.europa.eu/jcms/upload/docs/application/pdf/2019-10/cp190128en.pdf

CJEU case C-18/18 – Glawischnig-Piesczek ruling (03.10.2019)
http://curia.europa.eu/juris/document/document.jsf?text=&docid=218621&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=192400

Fighting defamation online – AG Opinion forgets that context matters (19.06.2019)
https://edri.org/fighting-defamation-online-ag-opinion-forgets-that-context-matters/

Dolphins in the Net, a New Stanford CIS White Paper
https://cyberlaw.stanford.edu/files/Dolphins-in-the-Net-AG-Analysis.pdf

SABAM vs Netlog – another important ruling for fundamental rights (16.02.2012)
https://edri.org/sabam_netlog_win/

close