23 Jul 2019

Civil society calls for a proper assessment of data retention

By Diego Naranjo

In preparation for a possible proposal for new legislation, the European Commission is conducting informal dialogues with different stakeholders to explore possibilities for data retention legislation that complies with the rulings of the Court of Justice of the European Union (CJEU) and the European Court of Human Rights (ECtHR). As part of these dialogues, EDRi met with the Commission’s Directorate-General for Migration and Home Affairs (DG HOME) on 6 June 2019.

On 22 July 2019, 30 civil society organisations sent an open letter to European Commission President-elect Ursula von der Leyen and Commissioners Avramopoulos, Jourová and King, urging the European Commission to commission an independent assessment of the necessity and proportionality of existing and potential legislative measures around data retention. Furthermore, the signatories asked them to ensure that the debate around data retention does not prevent the ePrivacy Regulation from being adopted swiftly.

You can read the letter below:

22 July 2019

By email:
President-elect von der Leyen
First Vice-President Timmermans

CC:
Commissioner Avramopoulos
Commissioner Jourová
Commissioner King

Dear First Vice-President Timmermans,
Dear President-elect von der Leyen,

The undersigned organisations represent non-governmental organisations working to protect and promote human rights in digital and connected spaces. We are writing to put forward suggestions to ensure compliance with the EU Charter of Fundamental Rights and the CJEU case law on data retention.

EU Member States (and EEA countries) have implemented the CJEU’s ruling of 8 April 2014 invalidating the Data Retention Directive to varying degrees. EDRi’s 2015 study reported that six Member States [1] had kept data retention laws containing features similar or identical to those that were ruled contrary to the EU Charter. Other evidence pointed in the same direction. [2] While the personal data of millions of Europeans was being stored illegally, the European Commission did not launch any infringement procedures. On 21 December 2016, the CJEU delivered its judgment in the Tele2/Watson case regarding data retention in Member States’ national law. In the aftermath of this judgment, the Council Legal Service unambiguously concluded that “a general and indiscriminate retention obligation for crime prevention and other security reasons would no more be possible at national level than it is at EU level, since it would violate just as much the fundamental requirements as demonstrated by the Court’s insistence in two judgments delivered in Grand Chamber.” [3]

On 6 June 2019 the Council adopted “conclusions on the way forward with regard to the retention of electronic communication data for the purpose of fighting crime” which claim that “data retention is an essential tool for investigating serious crime efficiently”. The Council tasked the Commission to “gather further information and organise targeted consultations as part of a comprehensive study on possible solutions for retaining data, including the consideration of a future legislative initiative.”

While the concept of blanket data retention appeals to law enforcement agencies, it has never been shown that the indiscriminate retention of the traffic and location data of over 500 million Europeans is necessary, proportionate or even effective.

Blanket data retention is an invasive surveillance measure covering the entire population. It can entail the collection of sensitive information about the social contacts (including business contacts), movements and private lives (e.g. contacts with physicians, lawyers, works councils, psychologists, helplines) of hundreds of millions of Europeans, in the absence of any suspicion. Telecommunications data retention undermines professional confidentiality and deters citizens from making confidential communications via electronic communication networks. The retained data is also of high interest to criminal organisations and unauthorised state actors from all over the world. Several successful data breaches have been documented. [4] Blanket data retention also undermines the protection of journalistic sources and thus compromises the freedom of the press. Overall, it damages the preconditions of open and democratic societies.

The undersigned organisations have therefore been in constructive dialogue with the European Commission services to ensure that the way forward includes the following suggestions:

  • The European Commission commissions an independent, scientific study on the necessity and proportionality of existing and potential legislative measures around data retention, including a human rights impact assessment and a comparison of crime clearance rates;
  • The European Commission and the Council ensure that the debate around data retention does not prevent the ePrivacy Regulation from being adopted swiftly;
  • The European Commission tasks the EU Fundamental Rights Agency (FRA) to prepare a comprehensive study on all existing data retention legislation and their compliance with the Charter and the CJEU/European Court of Human Rights case law on this matter;
  • The European Commission considers launching infringement procedures against Member States that enforce illegal data retention laws.

We look forward to your response and remain at your disposal to support the necessary initiatives to uphold EU law in this policy area.

Signatories:

European Digital Rights (EDRi)
Access Now
Chaos Computer Club (CCC)
Bits of Freedom
Asociatia pentru Tehnologie si Internet (ApTI)
Epicenter.works
Electronic Frontier Norway (EFN)
Dataskydd.net
Digital Rights Ireland
Digitalcourage
Privacy International
Vrijschrift
FITUG e.V.
Hermes Center for Transparency and Digital Human Rights
Access Info
Aktion Freiheit statt Angst
Homo Digitalis
Electronic Privacy Information Center (EPIC)
Iuridicum Remedium (IuRe)
La Quadrature du Net
Associação D3 – Defesa dos Direitos Digitais
IT-Political Association of Denmark (IT-Pol)
Panoptykon Foundation
Open Rights Group (ORG)
Electronic Frontier Finland (Effi ry)
Državljan D
Deutsche Vereinigung für Datenschutz e. V. (DVD)
//datenschutzraum
Föreningen för Digitala Fri- och Rättigheter (:DFRI)
AK Vorrat


[1] https://edri.org/edri-asks-european-commission-investigate-illegal-data-retention-laws/
[2] See, for example, Privacy International (2017), National Data Retention Laws since the Tele2/Watson Judgment: https://www.privacyinternational.org/sites/default/files/2017-12/Data%20Retention_2017.pdf
[3] Council document 5884/17, paragraph 13.
[4] A recent example: https://techcrunch.com/2019/06/24/hackers-cell-networks-call-records-theft/

22 Jul 2019

Von der Leyen: An ambitious agenda for digital rights

By Diego Naranjo

On 16 July 2019, the European Parliament elected Ursula von der Leyen as President of the European Commission with 383 votes, only nine above the minimum needed. Parts of the Socialists, Liberals, and Greens initially had doubts about the candidate. However, her speech in the Plenary before the vote and an agenda that incorporated a number of issues key to Greens and Socialists probably helped change some MEPs’ minds. The European United Left / Nordic Green Left (GUE-NGL) group continued to oppose von der Leyen because of her lack of ambition on social policies and climate change, on top of her background as Minister of Defence.

Her “agenda for Europe” includes six key areas, one of which is dedicated to making “Europe fit for the digital age”.

Towards a “Europe fit for the digital age”?

Most pressingly, within her first 100 days in office, von der Leyen wants to propose legislation on the human and ethical implications of artificial intelligence (AI). It is difficult to see how, even building on the work already done by the High-Level Expert Group on AI, the Commission could complete, within this timeframe, all the internal preparations, public consultations and inter-service consultations necessary to formulate a meaningful and future-proof piece of legislation on this topic.

We agree with von der Leyen’s aim of achieving technological sovereignty in “some critical technology areas”. Even though Europe has set a strong agenda on data protection and competition, building the necessary hardware and software that, among other features, protect privacy by design and by default could ensure better protection for all.

After the adoption of the Copyright Directive and the Terrorist Content Regulation, which both regulate some types of online content, it has become popular to also look at updating the old E-Commerce Directive. The Directive dates back to 2000 and remains one of the cornerstones of European internet regulation. Von der Leyen has made updating it one of the goals of her Commission Presidency: a new “Digital Services Act” will be proposed in order to “upgrade our liability and safety rules for digital platforms, services, and products”. The devil of this proposal-to-come is in the details. Not all of what has leaked out of the Commission so far about the Digital Services Act gives reason for concern. But policymakers’ desire to force US-based platforms to assume more responsibility for tackling unwanted online content may well lead to increased censorship through over-removal of perfectly legal expressions of opinion.

The Council needs to be pushed, too

Von der Leyen’s proposal to “jointly define standards for this new generation of technologies that will become the global norm” is welcome. A good first step would be for her to lead the Council of the European Union into promptly adopting a General Approach on the ePrivacy Regulation, which Member States have now blocked for almost 1 000 days.

Of course, von der Leyen’s proposals, including the non-digital ones, are merely a general framework, not a workplan. Much more needs to be done by the Commission to meet the criticism over a lack of ambition on certain policies, such as the climate emergency.

The coming months will be key for civil society to make sure that the EU starts implementing the “human-centric” vision emphasised by von der Leyen, and upholds the values of sustainability to fight the climate emergency, social justice to prevent poverty, and more democracy and transparency to prevent authoritarian tendencies. For all of this, the protection of fundamental rights in a technology-intensive and increasingly interconnected environment will be more necessary than ever.

More responsibility to online platforms – but at what cost? (19.07.2019)
https://edri.org/more-responsibility-to-online-platforms-but-at-what-cost/

E-Commerce review: Opening Pandora’s box? (20.06.2019)
https://edri.org/e-commerce-review-1-pandoras-box/

E-Commerce review: Technology is the solution. What is the problem? (11.07.2019)
https://edri.org/e-commerce-review-technology-is-the-solution-what-is-the-problem/

Civil society calls Council to adopt ePrivacy now (05.12.2018)
https://edri.org/civil-society-calls-council-to-adopt-eprivacy-now/

19 Jul 2019

More responsibility to online platforms – but at what cost?

By EDRi

In an internal note published by Netzpolitik.org on 16 July 2019, the European Commission presents current problems around the regulation of digital services and proposes a revision of the current E-Commerce Directive. Such a revision would have a huge impact on fundamental rights and freedoms. This is why it is crucial for the EU to get it right this time.

From a fundamental rights perspective, the internal note contains a few good proposals, a number of bad ones, and a pretty ugly one.

The Good

In its note, the Commission maintains that no online platform should be forced to actively monitor all user-uploaded content. As the Commission rightly says, this prohibition of a general monitoring obligation is a “foundational cornerstone” of global internet regulation. It has allowed the internet to become a place for everyone to enjoy the freedom of expression and communicate globally without having to go through online gatekeepers.

Unfortunately, the note is somewhat weak with regard to upload filters: the Commission merely says that transparency and accountability should be “considered” when algorithmic filters are used. It’s no secret though that filtering algorithms make too many mistakes – they do not understand context, political activism, or satire. Creating more transparency around the logic and data behind algorithmic decisions of big online platforms is certainly a good start. However, it isn’t enough to prevent fundamental rights violations and discrimination.

The Commission note recognises the need to re-assess whether and how different platform companies should be regulated differently. However, the Commission should bear in mind that not all so-called “hosting intermediaries” covered by its note are platforms similar to Facebook, Google, or Twitter. There are successful hosting intermediaries across Europe – such as the file sharing provider Tresorit or the hosting company Gandi.net – which host their customers’ content in a largely “content-agnostic” way.

Lastly, the Commission acknowledges that since the adoption of the current E-Commerce Directive, the internet has changed considerably: a small number of US-based online platforms have developed into businesses with unprecedented market power. The Commission therefore proposes to examine “options to define a category of services on the basis of a large or significant market status (…) in order to impose supplementary conditions”. When doing so, the Commission must be careful to clearly define which services would fall into which category, in order to avoid collateral damage to other types of services, including those that have not yet been invented.

The Bad

To guide its future policy initiatives, the Commission says it wants to analyse policy options for both illegal and potentially “harmful” but legal content. While the definition of what is illegal is decided as part of the democratic process in our societies, it is unclear which content should be considered “harmful” and who makes that call. Moreover, the term “harmful” lacks a legal definition, is vague and its meaning often varies depending on the context, time, and people involved. The term should therefore not form the basis for lawful restrictions on freedom of expression under European human rights law.

The Commission acknowledges that when platform companies are pushed to take measures against potentially illegal and harmful content, their balancing of interests leads them to over-block legal speech and monitor people’s communications in order to avoid legal liability for user content. At the same time, the note proposes that harmful content is best dealt with through voluntary codes of conduct, which shifts the censorship burden to the platform companies. However, companies’ terms of service are a convenient way of removing legal content, as they are vague and their redress mechanisms are often ineffective.

Drawing from the experience of the EU’s Code of Conduct on Hate Speech and the Code of Practice on Disinformation, this approach pushes platform companies to measure their success only by the number of deleted accounts or removed pieces of content and by how speedily those deletions are carried out. It does not, however, improve legal certainty for users, provide for proper review and counter-notice mechanisms, or allow for investigations into whether the removed material was even illegal.

The Ugly

The leaked Commission note claims that recent sector-specific content regulation laws such as the disastrous Copyright Directive or the proposed Terrorist Content Regulation have left “most of” the current E-Commerce Directive unaffected. This is euphemistic at the very least. Under these pieces of legislation, all online platforms are required to pro-actively monitor and search for certain types of content to prevent their upload, which makes them “active” under current case law and should flush their liability exemption down the toilet. This is not changed by the Copyright Directive’s claim on paper that it shall not affect the E-Commerce Directive’s liability rules.

But the EU Commission turning a blind eye to this obvious legal inconsistency isn’t the only ugly thing in there. The question that remains unanswered is: how can the Commission save the current liability exemption for the sake of internet users and their fundamental rights, while making it compatible with the hair-raising provisions of the Copyright Directive? It looks almost as if everybody secretly hopes that by the time the new Digital Services Act comes into force, sectoral laws such as the Copyright Directive will have been declared invalid by the European Court of Justice.

While such a turn of events would certainly be welcome, in the meantime the Commission should approach this issue transparently, and discuss with civil society and other stakeholders how the liability exemption can be salvaged and the negative impact of the sectoral laws contained.

EDRi’s recommendations

How to move ahead with the upcoming review of the E-Commerce Directive? Here are our recommendations (explained in more detail in our blog post series on liability and content moderation):

  1. Before reviewing the E-Commerce Directive, policymakers should answer the following questions: What are the problems that the Digital Services Act should address? Is there a clear understanding of the nature, size, and evolution of those problems? And what does scientific evidence tell us about which solutions could help us solve those problems?
  2. The Commission should analyse and mitigate any unwanted negative side effects of the proposals in the planned Digital Services Act, to avoid problems being treated only superficially while immense damage is done to fundamental rights such as the freedom of expression of millions of people.
  3. The Commission should strictly limit the scope of the Digital Services Act to illegal content. It would be wise not to venture into the slippery territory of potentially harmful but legal content. Instead, the Commission should follow its own 2016 Communication on platforms.
  4. Policymakers should seize this unique opportunity to put in place fundamental rights safeguards, due process guarantees, as well as a binding notice-and-action regime (a rough sketch of what such a regime could look like follows this list). That way, the EU could take the global lead by setting the right standards for moderating online content while protecting fundamental rights.
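To make the last recommendation more concrete, here is a minimal sketch of what a notice-and-action flow with due process safeguards could look like. This is a hypothetical illustration: the class names, states and fields are our own assumptions, not drawn from any existing or proposed legal text.

    # Hypothetical sketch of a notice-and-action flow with due process
    # safeguards. All names and states are illustrative assumptions.
    from dataclasses import dataclass, field
    from enum import Enum, auto

    class Status(Enum):
        RECEIVED = auto()
        RESTRICTED = auto()        # provisional restriction, not a final removal
        COUNTER_NOTICED = auto()
        RESTORED = auto()
        REMOVAL_CONFIRMED = auto()

    @dataclass
    class Notice:
        content_id: str
        notifier: str
        alleged_illegality: str    # a legal ground, not mere 'harmfulness'
        status: Status = Status.RECEIVED
        history: list = field(default_factory=list)

        def restrict(self) -> None:
            # Restriction is provisional and logged for transparency reporting.
            self.status = Status.RESTRICTED
            self.history.append("restricted pending review")

        def counter_notice(self, uploader_reason: str) -> None:
            # The uploader is informed and can contest the restriction.
            self.status = Status.COUNTER_NOTICED
            self.history.append("contested: " + uploader_reason)

        def resolve(self, illegal: bool) -> None:
            # Final decision after human review; judicial appeal remains open.
            self.status = Status.REMOVAL_CONFIRMED if illegal else Status.RESTORED
            self.history.append("final: " + self.status.name)

The point of the sketch is that every state change is recorded, the uploader can contest a restriction, and a removal only becomes final after review – exactly the safeguards that purely voluntary codes of conduct lack.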

Leaked document: EU Commission mulls new law to regulate online platforms (16.07.2019)
https://netzpolitik.org/2019/leaked-document-eu-commission-mulls-new-law-to-regulate-online-platforms/

EU Commission’s leaked internal note on revision of the current E-Commerce Directive (16.07.2019)
https://cdn.netzpolitik.org/wp-upload/2019/07/Digital-Services-Act-note-DG-Connect-June-2019.pdf

E-Commerce review: Opening Pandora’s box? (20.06.2019)
https://edri.org/e-commerce-review-1-pandoras-box/

E-Commerce review: Technology is the solution. What is the problem? (11.07.2019)
https://edri.org/e-commerce-review-technology-is-the-solution-what-is-the-problem/

17 Jul 2019

New privacy alliance to be formed in Russia, Central and Eastern Europe

By EDRi

Civil society advocates from Russia and Central and Eastern Europe have joined forces to form a new inter-regional NGO to promote privacy in countries bordering the EU.

The initiative also involves activists from post-Soviet countries, the Balkans and EU accession candidate countries. One of its primary objectives is to build coalitions and campaigns in countries that have weak or non-existent privacy protections. The project emerged from a three-day regional privacy workshop held earlier in 2019 at the Nordic Non-violence Study Group (NORNONS) centre in Sweden. The workshop agreed that public awareness of privacy in the countries represented was at a dangerously poor level, and concluded that better collaboration between advocates is part of the solution.

There has been a pressing need for such an alliance for many years. A vast arc of countries from Russia through Western Asia and into the Balkans has been largely overlooked by international NGOs and intergovernmental organisations (IGOs) concerned with privacy and surveillance.

The initiative was convened by Simon Davies, founder of EDRi member Privacy International and of the Big Brother Awards. He warned that government surveillance and abuse of personal information have become endemic in many of those countries:

“There is an urgency to our project. The citizens of places like Azerbaijan, Kazakhstan, Kyrgyzstan, Turkmenistan, and Armenia are exposed to wholesale privacy invasion, and we have little knowledge of what’s going on there. Many of these countries have no visibility in international networks. Most have little genuine civil society, and their governments engage in rampant surveillance. Where there is privacy law, it is usually an illusion. This situation applies even in Russia.”

A Working Group has been formed involving advocates from Russia, Serbia, Georgia, Ukraine and Belarus; its membership includes Danilo Krivokapić from EDRi member SHARE Foundation in Serbia. The role of this group is to steer the legal foundation of the initiative and to approve a formal Constitution.

The initiative’s Moderator is the former Ombudsman of Georgia, Ucha Nanuashvili. He too believes that the new NGO will fill a void in privacy activism that desperately needs to be filled:

“In my view, regions outside the EU need this initiative. Privacy is an issue that is becoming more prominent, and yet there is very little regional collaboration and representation. Particularly in the former Soviet states there’s an urgent need for an initiative that brings together advocates and experts in a strong alliance.”

Seed funding for the project has been provided by the Public Voice Fund of the Electronic Privacy Information Center (EPIC). EPIC’s president, Marc Rotenberg, welcomed the initiative and said he believed it would “contribute substantially” to the global privacy movement:

“We have been aware for some time that there is a dangerous void around privacy protection in those regions. We appreciate the good work of NGOs and academics to undertake this important collaboration.”

The Working Group hopes to formally launch the NGO in October in Albania. The group is presently considering several options for a name. Anyone interested in supporting the work of the initiative or wanting more information can contact Simon Davies at simon <at> privacysurgeon <dot> org.

The Nordic Nonviolence Study Group
https://www.nornons.org/

SHARE Foundation
https://www.sharefoundation.info/en/

EPIC’s Public Voice fund
https://epic.org/epic/publicvoicefund/

Mass surveillance in Russia
https://en.wikipedia.org/wiki/Mass_surveillance_in_Russia

Ucha Nanuashvili, Georgian Human Rights Centre
http://www.hridc.org/

17 Jul 2019

The first GDPR fines in Romania

By ApTI

The Romanian Data Protection Authority (DPA) has announced the first three fines issued in Romania under the EU General Data Protection Regulation (GDPR).

On 27 June 2019, a Romanian bank was fined approximately 130 000 euro (613 912 RON) for revealing excessive personal information, such as the national identification numbers and postal addresses of payment issuers, to the payment recipients. According to the Romanian DPA, 337 042 individuals were affected between February and December 2018.

The Romanian DPA based its decision on Article 5(1)(c) of the GDPR on data minimisation, and also referred to Recital 78. Inadequate technical and organisational measures, and the failure to design processes that reduce the personal information collected to the minimum necessary, meant that appropriate safeguards for protecting individuals’ data were never integrated.

It is debatable why the DPA did not also fine the bank for breaching Article 5(1)(b) on purpose limitation and Article 5(1)(f) on integrity and confidentiality. The national identification numbers and addresses of individuals were collected for internal identification purposes, not to be revealed to third parties. By revealing this data to the beneficiaries of the payments, the bank failed to ensure its security and confidentiality, exposing individuals’ personal data to potential unauthorised or unlawful processing.
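To illustrate the data minimisation principle at issue here, the sketch below strips a payment record down to the fields a recipient actually needs before it leaves the system. The record layout and field names are invented for this example and are not taken from the bank’s systems.

    # Hypothetical illustration of data minimisation (GDPR Article 5(1)(c)):
    # only the fields needed for the stated purpose are shared. The record
    # layout and field names are invented for this example.
    RECIPIENT_FIELDS = {"payer_name", "amount", "currency", "payment_reference"}

    def minimise_for_recipient(payment: dict) -> dict:
        """Return only the fields a payment recipient needs to see."""
        return {k: v for k, v in payment.items() if k in RECIPIENT_FIELDS}

    payment = {
        "payer_name": "Ion Popescu",
        "national_id": "1850101123456",   # must never reach the recipient
        "postal_address": "Str. Exemplu 1, Bucuresti",
        "amount": 250.00,
        "currency": "RON",
        "payment_reference": "Invoice 2019-041",
    }

    print(minimise_for_recipient(payment))
    # {'payer_name': 'Ion Popescu', 'amount': 250.0, 'currency': 'RON',
    #  'payment_reference': 'Invoice 2019-041'}

Had the data flow been designed this way, the national identification number and postal address would never have reached the payment recipients in the first place.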

Another fine, of approximately 15 000 euro (71 028 RON), followed on 2 July 2019. It was imposed on a hotel for breaching the security of its clients’ personal information. A list containing information about 46 guests who were having breakfast at the hotel was photographed by an unauthorised person and published online. The hotel reported the data breach to the DPA, and after the investigation, the DPA fined the hotel under Article 24 of the GDPR for failing to implement appropriate technical and organisational safeguards to protect personal data. The hotel did not take measures to ensure the security of the data against accidental or illegal disclosure and against unauthorised processing. The DPA’s decision recalls Recital 75, which mentions the risks and types of damage associated with the processing of personal data.

A third GDPR fine was announced on 12 July 2019. It was imposed on a company whose website, due to improper security measures after a platform migration, allowed public access via two links to a list of files containing details of several business contacts, including name, surname, postal address, email, phone, workplace and transaction details. The company was fined 3 000 euro.

The first GDPR fine (04.07.2019)
https://www.dataprotection.ro/index.jsp?page=Comunicat_Amenda_Unicredit&lang=en

The second GDPR fine (only in Romanian, 08.07.2019)
https://www.dataprotection.ro/index.jsp?page=O_noua_amenda_GDPR&lang=ro

The third GDPR fine (only in Romanian, 12.07.2019)
https://www.dataprotection.ro/?page=2019%20A%20treia%20amenda%20in%20aplicarea%20RGPD&lang=ro

(Contribution by Valentina Pavel, EDRi member ApTI, Romania)

17 Jul 2019

The digital rights of LGBTQ+ people: When technology reinforces societal oppressions

By Chloé Berthélémy

Online surveillance and censorship impact everyone’s rights, and particularly those of already marginalised groups such as lesbian, gay, bisexual, transgender, queer and other (LGBTQ+) people. The use of new technologies usually reinforces existing societal biases, making those communities particularly prone to discrimination and security threats. As a follow-up to Pride Month, here is an attempt to map out what is at stake for LGBTQ+ people in digital and connected spaces.

The internet has played a considerable role in the development and organisation of the LGBTQ+ community. It represents an empowering tool for LGBTQ+ people to meet with each other, to build networks and join forces, to access information and acquire knowledge about vital health care issues, as well as to express, spread and strengthen their political claims.

We’ve got a monopoly problem

The centralisation of electronic communications services around a few platforms has created new barriers for LGBTQ+ people to exercising their digital rights. Trapped in a network effect – whereby leaving the platform would represent a big loss for the user – most of them have only one place to go to meet and to spread their ideas. The content they post is moderated arbitrarily by these privately owned platforms, following their own standards and “community guidelines”.

Powerful platforms’ practices result in many LGBTQ+ accounts, posts and themed ads being taken down, while homophobic, transphobic and sexist content often remains untouched. In practice, these double standards for reporting and banning content mean that when queer and transgender people reclaim typical slurs and use them with pride, social media reviewers often disregard the intent and block them, whereas attackers use identical offensive terms without fearing the same punishment. Worse, automating the process only aggravates the injustice, as algorithms are incapable of telling the difference between the two cases. This leaves the LGBTQ+ community disenfranchised, without reasonable explanations or the possibility to appeal the decisions.
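The toy example below illustrates the last point: a context-blind keyword filter cannot tell a reclaimed slur from an attack, so it blocks both. The blocklist, the placeholder token “SLUR” and both example posts are invented for this illustration.

    # A deliberately naive keyword filter, showing why context-blind
    # moderation cannot distinguish reclaimed speech from attacks.
    # "SLUR" is a placeholder token; the posts and the rule are made up.
    BLOCKLIST = {"slur"}

    def naive_filter(post: str) -> bool:
        """Return True if the post would be blocked."""
        words = {w.strip(".,!?").lower() for w in post.split()}
        return not BLOCKLIST.isdisjoint(words)

    reclaimed = "Proud to be a SLUR and loving this community!"
    attack = "You are a SLUR and you should disappear."

    for post in (reclaimed, attack):
        # Both posts are blocked: the filter sees the token, not the intent.
        print(naive_filter(post.replace("SLUR", "slur")), "->", post)

Real moderation systems are more sophisticated than this, but the underlying limitation – matching tokens rather than understanding intent – is the same one described above.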

Community standards apply both on the open parts of social media and in the related private chats (such as Facebook Messenger). Since those networks play an essential role in discussing queer issues, dating, and sexting, LGBTQ+ people become highly dependent on the platforms’ tolerance for sexual expression and nudity. Sudden changes in community guidelines are sometimes carried out without any user consultation or control. For example, the LGBTQ+ community was particularly harmed when Tumblr decided to no longer allow Not Safe For Work (NSFW) content and Facebook banned “sexual solicitation” on its services.

Another example of companies’ policies affecting transgender people specifically is the rising trend of strict real-name policies online. Authentication requirements based on official ID documents prevent transgender people from using their new names and identities. For many of them, notably those living in repressive countries, it is difficult to have their names and gender markers legally changed. As a consequence, they see their accounts deleted on a regular basis, after a few months of use, losing all their content and contacts. With little chance of retrieving their accounts, they find their freedoms online severely hindered.

There is no such thing as a safe space online

Even when LGBTQ+ people leave the social media giants, they cannot necessarily turn to a safer platform online. Grindr, the biggest social networking app for gay, bi, trans, and queer people, was used by Egyptian authorities to track down and persecute LGBTQ+ people. Using fake profiles, the police were able to collect evidence, and to imprison, torture and prosecute people for illegal sexual behaviour. This has had a chilling effect on the community, which has become reluctant to engage in new encounters.

Other dangerous practices involve the outing of LGBTQ+ people online. For instance, a Twitter account was purposely set up in Paraguay to expose people’s sexual orientation by extracting revealing content, such as nude pictures posted on Grindr, and posting it publicly. Despite many complaints made against the account, it disseminated content for six weeks before the platform finally deleted it. The damage to the victims is long-term and irreparable. This is particularly the case in countries where there is no hate crime legislation, or where such legislation is not fully implemented, resulting in impunity for State and non-State actors’ homophobic and transphobic violence.

Technology is not neutral

The poor security with which those services and apps are built reflects their Western-centric, heteronormative and gender-biased nature. This endangers already vulnerable LGBTQ+ communities when the apps spread globally and go viral, especially in the Global South. Technologies, in particular emerging ones, can be misused to discriminate. For instance, a facial recognition system has been trained to recognise homosexual people based on their facial features. Not only is the purpose of this technology dubious, it is also dangerous if scaled up and put in the hands of repressive governments.

The main problem is that communities are not involved in the production stages. It is hard to incentivise profit-driven companies to adapt their services to specific needs while keeping them free and accessible for all. Marginalised groups can usually not afford additional premium security features. Furthermore, the developer community remains majority white, middle-aged and heterosexual, with little understanding of the local realities and dangers in other regions of the world. Encouraging LGBTQ+ people with diverse regional backgrounds to join this community would noticeably improve the offer of community-led, free, open and secure services. A lot remains to be done to push companies to engage with affected communities in order to develop tools that are privacy-friendly and inclusive by design.

A good leading example is the initiative by EDRi member ARTICLE 19 with Grindr, which includes the ability to change the app icon’s appearance and adds a password security lock to better protect LGBTQ+ users.

This article is based on an interview with Eduardo Carrillo, digital LGBTQI+ activist in Paraguay and project director at TEDIC. TEDIC applies a gender perspective to its work on digital rights and carries out support activities for the local LGBTQ+ community to mitigate the discrimination it encounters.

In this article, we use the term LGBTQ+ to designate Lesbians, Gays, Bisexuals, Transsexuals, Queers, and all the other gender identities and sexual orientations that do not correspond to the heterosexual and cisgender (when the gender identity of a person matches the sex assigned at birth) norms.

Women’s rights online: tips for a safer digital life (08.03.2019)
https://edri.org/womens-rights-online-tips-for-a-safer-digital-life/

How to retrieve our account on Facebook: Online censorship of the LGBTQI community (02.05.2018)
https://www.tedic.org/como-recuperar-nuestra-cuenta-en-facebook-censura-en-linea-hacia-colectivo-lgbtqi/

App Security Flaws Could Create Added Risks for LGBTQI Communities (17.12.2018)
https://cyborgfeminista.tedic.org/app-security-flaws-could-create-added-risks-for-lgbtqi-communities/

No, Facebook’s updated sex policy doesn’t prohibit discussing your sexual orientation (06.12.2018)
https://www.wired.com/story/facebooks-hate-speech-policies-censor-marginalized-users/

Designing for the crackdown (25.4.2018)
https://www.theverge.com/2018/4/25/17279270/lgbtq-dating-apps-egypt-illegal-human-rights

(Contribution by Chloe Berthélémy, EDRi)

17 Jul 2019

“SIN vs Facebook”: First victory against privatised censorship

By Panoptykon Foundation

In an interim measures ruling of 11 June 2019, the District Court in Warsaw temporarily prohibited Facebook from removing fan pages, profiles, and groups run by the Civil Society Drug Policy Initiative (SIN) on Facebook and Instagram, and from blocking individual posts. SIN, a Polish non-profit organisation promoting evidence-based drug policy, filed a lawsuit against Facebook in May 2019, with the support of the Polish EDRi member Panoptykon Foundation.

In its lawsuit, SIN argues that by blocking its content, Facebook restricted, in an unjustified way, the organisation’s ability to disseminate information, express opinions and communicate with its audience. Concerned about further censorship, SIN was not able to freely carry out its educational activities. Moreover, the removal of content suggested that the organisation’s activity on the platforms was harmful, thus undermining SIN’s credibility. By granting the request for interim measures, the court decided that SIN had substantiated its claims. Although this is only the beginning of the trial, it is a first important step in the fight against excessive and opaque content blocking practices on social media.

The interim measures ruling of 11 June implies that – at least until the final judgement in the case – SIN’s activists may carry out their drug policy activities without fearing that they will suddenly lose the ability to communicate with their audience. The court has furthermore obliged Facebook to store, though not to restore, the profiles, fan pages and groups deleted in 2018 and 2019. Storing them would allow SIN – if it were to win the case – to have them quickly restored, together with the entire published content, comments by other users, as well as followers and people who liked the fan page. This is not the only good news: the court has also confirmed that Polish users can enforce their rights against the tech giant in Poland. Unfortunately, the court did not approve, at this stage, the request to restore the deleted fan pages, profiles, and groups for the duration of the trial. The court argued that this would be too far-reaching a measure, which would, in practice, amount to recognising the fundamental claim expressed in the lawsuit.

In June 2019, educational posts in which SIN’s educators cautioned against the use of some substances during hot weather were again blocked, this time on Instagram. SIN received a warning that “subsequent infringements of the community standards” may result in the removal of the entire profile. Now, after the interim measures ruling, SIN will be able to catch its breath and continue its social media activity without worrying about being blocked again at any time. This “private censorship” is one of the modern-day threats to freedom of speech. Platforms such as Facebook and Instagram have become “gatekeepers” to online expression, and, just as in SIN’s case, there is often no viable alternative to them. Getting blocked on these platforms is a significant limitation on disseminating information.

The court’s interim decision means that, for now, Facebook will not be able to arbitrarily decide to block content published by SIN. By issuing this decision, the court also recognised its jurisdiction to hear the case in Poland under Polish law. This is great news for Polish users and possibly for users from other EU Member States. In cases against global internet companies, the possibility to claim one’s rights before a domestic court is a precondition for viable access to justice – if the only option were to sue them in their home countries, the costs, the language barrier, and a foreign legal system would make it very difficult, if not impossible, for most citizens to exercise their rights.

However, the court’s decision is not final – after its delivery, Facebook Ireland will have the right to appeal it before the Court of Appeal. The decision was made ex parte, solely on the basis of the position presented by SIN, without the participation of the other party; it only implements a temporary measure and does not prejudge the final verdict of the trial – the main proceedings are only about to begin.

Panoptykon Foundation
https://panoptykon.org/

SIN vs Facebook
https://panoptykon.org/sinvsfacebook

SIN v Facebook: Tech giant sued over censorship in landmark case (08.05.2019)
https://edri.org/sin-v-facebook/

(Contribution by Anna Obem and Dorota Glowacka, EDRi member Panoptykon Foundation, Poland)

17 Jul 2019

Microsoft Office 365 banned from German schools over privacy concerns

By Jan Penfrat

In a bombshell decision, the Data Protection Authority (DPA) of the German Land of Hesse has ruled that schools are banned from using Microsoft’s cloud office product “Office 365”. According to the decision, the platform’s standard settings expose personal information about school pupils and teachers “to possible access by US officials” and are thus incompatible with European and local data protection laws.

The ruling is the result of several years of domestic debate about whether German schools and other state institutions should be using Microsoft software at all, reports ZDNet. In 2018, investigators in the Netherlands discovered that the data collected by Microsoft “could include anything from standard software diagnostics to user content from inside applications, such as sentences from documents and email subject lines” – all of which contravenes the General Data Protection Regulation (GDPR) and potentially local laws for the protection of the personal data of underage pupils.

While Microsoft’s “Office 365” is not a new product, the company has recently changed its offer in Germany. Until now, it provided customers with a special German cloud version hosted on servers run by the German telecoms giant Deutsche Telekom. Deutsche Telekom served as a kind of infrastructure trustee, putting customer data outside the legal reach of US law enforcement and intelligence agencies. In 2018, however, Microsoft announced that this special arrangement would be terminated in 2019 and that German customers would be offered a move to Microsoft’s standard cloud offer in the EU.

Microsoft insists that nothing changes for customers because the new “Office 365” servers are also located in the EU, or even in Germany. However, legal developments in the US have put the Hesse DPA on high alert: the newly enacted US CLOUD Act empowers US government agencies to request access to customer data from all US-based companies, no matter where their servers are located.

To make things even worse, Germany’s Federal Office for Information Security (BSI) recently expressed concerns about the telemetry data that the Windows 10 operating system collects and transmits to Microsoft. So even if German (or European) schools stopped using the company’s cloud office suite, its ubiquitous Windows operating system would still leak data to the US, with no way for users to control or stop it.

School pupils are usually not able to give consent, Max Schrems from EDRi member noyb told ZDNet. “And if data is sent to Microsoft in the US, it is subject to US mass surveillance laws. This is illegal under EU law.” Even if that was legal, says the Hesse DPA, schools and other public institutions in Germany have a “particular responsibility for what they do with personal data, and how transparent they are about that.”

It seems that fulfilling those responsibilities hasn’t been possible while using Microsoft Office 365. As a next step, it is crucial that European DPAs discuss these findings within the European Data Protection Board in order to arrive at an EU-wide rule that protects children’s personal data from unregulated access by US agencies. In the meantime, European schools would be well advised to switch to privacy-friendly alternatives such as Linux, LibreOffice, and Nextcloud.

Statement of the Commissioner for Data Protection and Freedom of Information of the Land of Hesse regarding the use of Microsoft Office 365 in schools in Hesse (only in German, 09.07.2019)
https://datenschutz.hessen.de/pressemitteilungen/stellungnahme-des-hessischen-beauftragten-f%C3%BCr-datenschutz-und

Microsoft Office 365: Banned in German schools over privacy fears (12.07.2019)
https://www.zdnet.com/article/microsoft-office-365-banned-in-german-schools-over-privacy-fears

Microsoft offers cloud services in new German data centers as of 2019 in reaction to changes in demand (only in German, 31.08.2018)
https://news.microsoft.com/de-de/microsoft-cloud-2019-rechenzentren-deutschland/

(Contribution by Jan Penfrat, EDRi)

11 Jul 2019

E-Commerce review: Technology is the solution. What is the problem?

By Kirsten Fiedler

This is the second article in our series on Europe’s future rules for intermediary liability and content moderation. You can read the introduction here.

When it comes to tackling illegal and “harmful” content online, there’s a major trend in policy-making: Big tech seems to be both the cause of and the solution to all problems.

However, hoping that technology will solve problems that are deeply rooted in our societies is misguided. Moderating the content that people post online can only ever be a partial answer to much wider societal issues. It might help us deal with some of the symptoms, but it won’t address the root of the problems.

Moreover, giving in to hype and trying to find “quick fixes” for trending topics occupying the news cycle is not good policy-making. Rushed policy proposals rarely allow for an in-depth analysis of the full picture, or for the consideration and mitigation of potential side effects. Worse, such proposals are often counter-productive.

For instance, an Oxford Internet Institute study revealed that the problem of disinformation on Twitter during the EU elections had been overstated. Less than 4% of sources circulating on that platform during the researchers’ data collection period qualified as disinformation. Overall, users shared far more links to established news outlets than to suspicious online sources.

Therefore, before launching any review of the EU’s e-Commerce Directive, policy-makers should ask themselves: What are the problems we want to address? Do we have a clear understanding of the nature, scale, and evolution of those problems? What can be done to efficiently tackle them? Even though the Directive’s provisions on the liability of online platforms also impact content moderation, the upcoming e-Commerce review is too important to be hijacked by the blind ambition to eradicate all objectionable speech online.

In Europe, the decision about what is illegal is part of the democratic process in the Member States. Defining “harmful online content” that is not necessarily illegal is much harder and there is no process or authority to do it. Therefore, regulatory efforts should focus on illegal content only. The unclear and slippery territory of attempting to regulate “harmful” (but legal) content puts our democracy, our rights and our freedoms at risk. When reviewing the E-Commerce Directive, the EU Commission should follow its Communication on Platforms from 2016.

Once the problems are properly defined and policy-makers agree on what kind of illegal activity should be tackled online, any regulation of online platforms and uploaded content should take a closer look at the services it attempts to regulate, and assess how content spreads and at what scale. Regulating the internet as if it consisted only of Google and Facebook will inevitably lead to an internet that does consist only of Google and Facebook. Unfortunately, as we have seen in the debate around upload filters during the copyright reform, political thinking on speech regulation is focused on a small number of very dominant players (most notably Facebook, YouTube, and Twitter). This political focus paradoxically turned out to reinforce the dominant market position of existing monopolies. It would be very unfortunate to repeat those mistakes in the context of legislation with consequences as far-reaching as those of the EU’s E-Commerce Directive.


European Commission Communication on Online Platforms and the Digital Single Market Opportunities and Challenges for Europe (25.05.2016)
https://ec.europa.eu/digital-single-market/en/news/communication-online-platforms-and-digital-single-market-opportunities-and-challenges-europe

Junk News during the EU Parliamentary Elections (21.05.2019)
https://comprop.oii.ox.ac.uk/research/eu-elections-memo/

09 Jul 2019

Join EDRi as policy intern!

By EDRi

European Digital Rights (EDRi) is an international not-for-profit association of 42 digital human rights organisations from across Europe. We defend and promote rights and freedoms in the digital environment, such as the right to privacy, freedom of expression, and access to information.

Join EDRi now and become a superhero for the defence of our rights and freedoms online!

The EDRi office in Brussels is currently accepting applications for an intern to support our policy team. This is your opportunity to get first-hand experience in EU policy-making and contribute to change in favour of digital rights and freedoms across Europe. The internship will run from 15 September (or 1 October) to 31 March and is remunerated with a minimum of 750 EUR per month (under a “convention d’immersion professionnelle”).

Key tasks:

  • Conducting research and analysis on topics such as data protection, privacy, net neutrality, intermediary liability and freedom of expression, encryption, cross-border access to data and digital trade
  • Drafting regular internal policy updates for the EDRi network
  • Monitoring international, EU and national policy developments
  • Organising and participating in meetings and events
  • Supporting the creation of the EDRi-gram newsletter
  • Assisting in the preparation of draft reports, presentations and other internal and external documents
  • Supporting EDRi’s day-to-day office management
  • Developing public education materials

Qualifications:

  • Demonstrated interest in and enthusiasm for human rights and technology-related legal issues
  • Good understanding of European policy-making
  • Excellent research and writing skills
  • Fluent command of spoken and written English
  • Computer literacy
  • Experience in the fields of data protection, privacy, copyright, net neutrality, intermediary liability and freedom of expression, surveillance and law enforcement, or digital trade is an asset

Read about previous internship experiences at EDRi here.

How to apply:

To apply please send a maximum one-page cover letter and a maximum two-page CV in English and only as pdf files (other formats such as doc and docx will not be accepted) to jan >dot< penfrat >at< edri >dot< org.

The closing date for applications is 22 July 2019. Interviews and written assignments will take place between 26 and 30 August 2019. Please note that due to limited resources, only shortlisted candidates will be contacted.

We are an equal opportunities employer with a strong commitment to transparency and inclusion. People from all backgrounds are encouraged to apply and we strive to have a diverse and inclusive working environment.
