05 Dec 2018

Civil society calls Council to adopt ePrivacy now

By EDRi

EDRi has joined a letter from 30 civil society and online industry representatives to the Ministers in the Telecoms Council, expressing the wide support for the ePrivacy Regulation. The letter describes the clear and urgent need to strengthen the privacy and security of electronic communications in the online environment, especially in the wake of repeated scandals and practices that undermine citizens’ right to privacy and trust in online services.

The support from privacy-friendly businesses such as Qwant, Startpage, Startmail, TeamDrive, Tresorit, Tutanota, ValidSoft and WeTransfer shows the positive implications that ePrivacy will have for a dynamic and innovative European internet industry. The collaboration between organisations defending citizens’ rights and industry representatives underlines that both EU citizens and privacy-friendly business models have much to gain from a strong ePrivacy Regulation.

EDRi wholeheartedly supports the coalition’s call on the Council of Ministers to finally move the ePrivacy discussion forward, so that a compromise with the European Parliament can be found before the elections in May 2019. If this is achieved, European citizens will benefit from a strong privacy regime and a less intrusive, more dynamic and more innovative EU data economy.

You can find the letter here.

Open letter to EU member states from consumer groups, NGOs and industry representatives in support of the ePrivacy Regulation (03.12.2018)
https://edri.org/files/eprivacy/20181203-Joint-letter-NGO-and-industry.pdf

ePrivacy review: document pool
https://edri.org/eprivacy-directive-document-pool/

Council continues limbo dance with the ePrivacy standards (24.10.2018)
https://edri.org/council-continues-limbo-dance-with-the-eprivacy-standards/

07 Nov 2018

UN Special Rapporteur analyses AI’s impact on human rights

By Chloé Berthélémy

In October 2018, the United Nations (UN) Special Rapporteur for the promotion and protection of the right to freedom of opinion and expression, David Kaye, released his report on the implications of artificial intelligence (AI) technologies for human rights. The report was submitted to the UN General Assembly on 29 August 2018 but has only been published recently. The text focuses in particular on freedom of expression and opinion, privacy and non-discrimination. In the report, the UN Special Rapporteur David Kaye first clarifies what he understands by artificial intelligence and what using AI entails for the current digital environment, debunking several myths. He then provides an overview of all potential human rights affected by relevant technological developments, before laying down a framework for a human rights-based approach to these new technologies.


1. Artificial intelligence is not a neutral technology

David Kaye defines artificial intelligence as a “constellation of processes and technologies enabling computers to complement or replace specific tasks otherwise performed by humans” through “computer code […] carrying instructions to translate data into conclusions, information or outputs.” He states that AI is still highly dependent on human intervention, as humans need to design the systems, define their objectives and organise the datasets for the algorithms to function properly. The report points out that AI is therefore not a neutral technology, as the use of its outputs remains in the hands of humans.

Current forms of AI systems are far from flawless, and they demand human scrutiny and sometimes even correction. The report considers that the automated character of AI systems, the quality of the data they analyse and the systems’ adaptability are all sources of bias. Automated decisions may produce discriminatory effects, as they rely exclusively on specific criteria without necessarily balancing them, and they undermine scrutiny of and transparency over the outcomes. AI systems also rely on huge amounts of data of questionable origin and accuracy. Furthermore, AI can identify correlations that are easily mistaken for causation. David Kaye points to adaptability as the main problem once human supervision is lost: it poses challenges to ensuring transparency and accountability.
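The point about correlation and causation can be illustrated with a small sketch (synthetic data, and Python 3.10+ for statistics.correlation; the example is ours, not the report’s): two quantities that merely both grow over time come out as strongly correlated, even though neither causes the other.

    # Toy example: two unrelated series that both grow over time look correlated.
    # Synthetic data, for illustration only (requires Python 3.10+).
    import random
    import statistics

    random.seed(0)
    years = range(20)
    ice_cream_sales = [100 + 5 * t + random.gauss(0, 3) for t in years]
    drowning_cases = [10 + 0.5 * t + random.gauss(0, 1) for t in years]

    r = statistics.correlation(ice_cream_sales, drowning_cases)
    print(f"correlation: {r:.2f}")  # high, yet neither variable causes the other

A system trained on such data would happily treat one variable as predictive of the other, which is exactly the kind of conclusion that needs human scrutiny.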

2. Current uses of artificial intelligence interfere with human rights

David Kaye describes three main applications of AI technology that pose important threats to several human rights.

The first problem raised is AI’s effect on freedom of expression and opinion. On the one hand, “artificial intelligence shapes the world of information in a way that is opaque to the user” and conceals its role in determining what the user sees and consumes. On the other, the personalisation of information display has been shown to reinforce biases and “incentivize the promotion and recommendation of inflammatory content or disinformation in order to sustain users’ online engagement”. These practices affect individuals’ self-determination and their autonomy to form and develop personal opinions based on factual and varied information, thereby threatening freedom of expression and opinion.

Secondly, similar concerns can be raised in relation to our right to privacy, in particular with regard to AI-enabled micro-targeting for advertising purposes. As David Kaye states, profiling and targeting users foster the mass collection of personal data and lead to inferring “sensitive information about people that they have not provided or confirmed”. The limited possibilities for individuals to control the personal data collected and generated by AI systems call the respect of privacy into question.

Third, the Special Rapporteur highlights AI as an important threat to our rights to freedom of expression and non-discrimination, due to the role AI is increasingly given in the moderation and filtering of content online. Despite some companies’ claims that artificial intelligence can take over where human capacities are exceeded, the report sees the recourse to automated moderation as impeding the exercise of human rights. In fact, artificial intelligence is unable to resist discriminatory assumptions or to grasp sarcasm and the cultural context of each piece of content published. As a result, freedom of expression and our right not to be discriminated against can be severely hampered by delegating complex censorship exercises to AI and private actors.

3. A set of recommendations for both companies and States

Recalling that “ethics” is not a cover for companies and public authorities to neglect binding and enforceable human rights-based regulation, the UN Special Rapporteur recommends that “any efforts to develop State policy or regulation in the field of artificial intelligence should ensure consideration of human rights concerns”.

David Kaye suggests that human rights should guide the development of business practices and the design and deployment of AI, and calls for enhanced transparency, disclosure obligations and robust data protection legislation – including effective means of remedy. Online service providers should make clear which decisions are made with human review and which by artificial intelligence systems alone. This information should be accompanied by explanations of the decision-making logic used by algorithms. Further, the “existence, purpose, constitution and impact” of AI systems should be disclosed in an effort to improve individual users’ understanding of the topic. The report also recommends making available and publicising data on the “frequency at which AI systems are subject to complaints and requests for remedies, as well as the types and effectiveness of remedies available”.

States are identified as key actors responsible for creating a legislative framework that fosters a pluralistic information landscape, prevents technology monopolies and supports network and device neutrality.

Lastly, the Special Rapporteur provides useful tools to oversee AI development:

  1. human rights impact assessments performed prior to, during and after the use of AI systems;
  2. external audits and consultations with human rights organisations;
  3. individual choice enabled through notice and consent;
  4. effective remedy processes to end human rights violations.

UN Special Rapporteur on Freedom of Expression and Opinion Report on AI and Freedom of Expression (29.08.2018)
https://freedex.org/wp-content/blogs.dir/2015/files/2018/10/AI-and-FOE-GA.pdf

Civil society calls for evidence-based solutions to disinformation
(19.10.2018)
https://edri.org/civil-society-calls-for-evidence-based-solutions-to-disinformation/

(Contribution by Chloé Berthélémy, EDRi intern)

24 Oct 2018

Council continues limbo dance with the ePrivacy standards

By Yannic Blaschke

It has been 652 days since the European Commission launched its proposal for an ePrivacy Regulation. The European Parliament took a strong stance on the proposal when it adopted its position a year ago, but the Council of the European Union is still only taking baby steps towards finding its own position.


In its latest proposal, the Austrian Presidency of the Council unfortunately continues the trend of presenting the Council with suggestions that lower the privacy protections proposed by the Commission and strengthened by the Parliament. The latest working document, published on 19 October 2018, makes it apparent that we are far from having reached the bottom of what the Council considers acceptable in treating our personal data as a commodity.

Probably the gravest change to the text is to allow tracking technologies to be stored on an individual’s computer without consent for websites that partly or wholly finance themselves through advertising, provided they have informed the user of the existence and use of such processing and the user “has accepted this use” (Recital 21). The “acceptance” of such identifiers as suggested is far from the informed consent that the General Data Protection Regulation (GDPR) established as a standard in the EU. The Austrian Presidency text would put cookies that are necessary for regular use (such as language preferences and the contents of a shopping basket) on the same level as the highly invasive tracking technologies pushed by the Google/Facebook duopoly in the current commercial surveillance framework. This opens a Pandora’s box of ever more sharing, merging and reselling of citizens’ data in huge online commercial surveillance networks, and of micro-targeting them with commercial and political manipulation, without the knowledge of the person whose private information is being shared with a large number of unknown third parties.

One of the great added values of the ePrivacy Regulation (which was originally intended to enter into force at the same time as the GDPR) is that it is supposed to raise the bar for companies and other actors who want to track citizens’ behaviour on the internet by placing tracking technologies on users’ computers. Currently, such an accumulation of potentially highly sensitive data about an individual mostly happens without individuals’ real knowledge, often through coerced (not freely given) consent, and the data is shared and resold extensively within opaque advertising networks and data-broker services. In a strong and future-proof ePrivacy Regulation, the collection and processing of such behavioural data therefore needs to be tightly regulated and based on the informed consent of the individual – an approach that is now increasingly jeopardised as the Council appears ever more favourable to tracking technologies.

The detrimental change to Recital 21 is only one of the bad ideas through which the Austrian Presidency seeks to strike a consensus: there is, in addition, the undermining of the safeguards for “compatible further processing” (which is itself already a bad idea introduced by the Council) in Article 6(2aa)(c), and the watering down of the requirements for regulatory authorities in Article 18, which creates significant friction with the GDPR. With one disappointing “compromise” after another, the ePrivacy Regulation is increasingly in danger of falling short of its ambition to end the unwanted stalking of individuals on the internet.

EDRi will continue to observe the development of the legislation closely, and calls on everyone in favour of a solid EU privacy regime that protects citizens’ rights and competition to voice their demands to their Member States.

Five Reasons to be concerned about the Council ePrivacy draft (26.09.2018)
https://edri.org/five-reasons-to-be-concerned-about-the-council-eprivacy-draft/

EU Council considers undermining ePrivacy (25.07.2018)
https://edri.org/eu-council-considers-undermining-eprivacy/

Your ePrivacy is nobody else’s business (30.05.2018)
https://edri.org/your-eprivacy-is-nobody-elses-business/

e-Privacy revision: Document pool (10.01.2017)
https://edri.org/eprivacy-directive-document-pool/

(Contribution by Yannic Blaschke, EDRi intern)

24 Oct 2018

ePrivacy: Public benefit or private surveillance?

By Yannic Blaschke

92 weeks after the proposal was published, the EU is still waiting for an ePrivacy Regulation. The Regulation is supposed to replace the current ePrivacy Directive, aligning it with the General Data Protection Regulation (GDPR).

While the GDPR regulates how personal data is processed in general, the ePrivacy Regulation specifically regulates the protection of the privacy and confidentiality of electronic communications. The data in question includes not only the content and the “metadata” of communications (data on when, where and with whom a person communicated), but also other identifiers, such as “cookies”, that are stored on users’ computers. To make the legislation fit for purpose in light of technological developments, the European Commission (EC) proposal addresses some of the major changes in communications of the last decade, including the use of so-called “over the top” services such as WhatsApp and Viber.
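How such identifiers work can be sketched in a few lines (hypothetical domain and cookie names; a simplification of real browser behaviour): on a first visit the server stores a unique value in the browser, and the browser sends that same value back with every later request, which is what allows separate visits to be linked into a profile.

    # Minimal illustration of a cookie acting as a persistent identifier.
    # Hypothetical names throughout; this is not any specific tracker's code.
    import uuid

    cookie_jar = {}  # what the browser stores, keyed by domain

    def first_visit(domain: str) -> str:
        """The server's response sets a unique identifier in the browser."""
        tracking_id = str(uuid.uuid4())           # e.g. Set-Cookie: uid=<random value>
        cookie_jar[domain] = {"uid": tracking_id}
        return tracking_id

    def later_visit(domain: str) -> dict:
        """The browser sends the stored identifier back with every request."""
        return cookie_jar.get(domain, {})         # e.g. Cookie: uid=<same value>

    first_visit("ads.example")
    print(later_visit("ads.example"))  # the same "uid" lets separate visits be linked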


The Regulation is currently facing heavy resistance from certain sectors of the publishing and behavioural advertising industry. After an improved text was adopted by the European Parliament (EP), it is now being delayed at the Council of the European Union level, where EU Member States are negotiating the text.

One of the major obstacles in the negotiations is the question of the extent to which providers such as telecommunications companies can use metadata for purposes other than the original service. Some private companies – the same ones that questioned the need for user consent in the GDPR – have now re-wrapped their argument, saying that an “overreliance” on consent would substantially hamper future technologies. Over-reliance on anything is, by definition, not good, nor is under-reliance, but such sophistry is a mainstay of lobby language.

However, this lobby attack omits the fact that compatible further processing would not lead only to benign applications in the public interest: since the proposal does not limit further processing to statistical or research purposes, it could just as well be used for commercial purposes such as commercial or political manipulation. But even with regard to the potentially more benevolent applications of AI, it should be kept in mind that automated data processing has in some cases been shown to be highly detrimental to parts of society, especially vulnerable groups. This should not be ignored when evaluating the safety and privacy of aggregate data. For instance, while using location data for “smart cities” can make sense in some narrowly defined circumstances, such as traffic control or natural disaster management, it gains a much more chilling undertone when it leads, for instance, to racial discrimination in company delivery services or law enforcement activities. It is easy to imagine that metadata, one of the most revealing and most easily processed forms of personal data, could be used for equally crude or misaligned applications, yielding highly negative outcomes for vulnerable groups. Moreover, where aggregate, pseudonymised data produces adverse outcomes for an individual, not even rectification or deletion of that person’s data will lead to an improvement, as long as the accumulated data of similar individuals is still available.

Another pitfall of this supposedly private, ostensibly pseudonymised way of processing is that, even if individual users are not targeted, companies may need to maintain citizens’ metadata in identifiable form in order to link existing data sets with new ones. This could essentially lead to a form of voluntary data retention, which might soon attract the interest of public security actors rapaciously seeking new data sources and new powers. If such access were granted, individuals would essentially be identifiable. Even retaining “only” aggregate data for certain societal groups or minorities might often already be enough to spark discriminatory treatment.

Although the Austrian Presidency of the Council of the European Union did include some noteworthy safeguards for compatible further processing in its most recent draft compromise, most notably the obligation to consult the national Supervisory Authority or to conduct a data protection impact assessment, the current proposal does not adequately empower individuals. Given that the interpretation of what constitutes “compatible” further processing may vary significantly among Member States (which would lead to years of litigation), it should be up to citizens to decide (and up to industry to prove) which forms of metadata processing are safe, fair and beneficial to society.

Five Reasons to be concerned about the Council ePrivacy draft (26.09.2018)
https://edri.org/five-reasons-to-be-concerned-about-the-council-eprivacy-draft/

EU Council considers undermining ePrivacy (25.07.2018)
https://edri.org/eu-council-considers-undermining-eprivacy/

Your ePrivacy is nobody else’s business (30.05.2018)
https://edri.org/your-eprivacy-is-nobody-elses-business/

e-Privacy revision: Document pool (10.01.2017)
https://edri.org/eprivacy-directive-document-pool/

(Contribution by Yannic Blaschke, EDRi intern)

18 Oct 2018

#PrivacyCamp19 – Save the Date and Call for Panel Proposals

By EDRi

Join us for the 7th annual Privacy Camp!

Privacy Camp will take place on 29 January 2019 in Brussels, Belgium, just before the start of the CPDP conference. Privacy Camp brings together civil society, policy-makers and academia to discuss existing and looming problems for human rights in the digital environment.

Take me to the call for panel submissions.
Take me to the call for user story submissions.

Platforms, Politics, Participation

Privacy Camp 2019 will focus on digital platforms, their societal impact and their political significance. Due to the rise of a few powerful companies such as Uber, Facebook, Amazon and Google, the term “platform” has moved beyond its initial computational meaning of technological architecture and has come to be understood as a socio-cultural phenomenon. Platforms are said to facilitate and shape human interactions, thus becoming important economic and political actors. While the companies offering platform services are increasingly the target of regulatory action, they are also considered allies of national and supranational actors in enforcing policies voluntarily and in gauging political interest and support. Digital platforms employ business models that rely on the collection of large amounts of data and the use of advanced algorithms, which raise concerns about their surveillance potential and their impact on political events. Increasingly rooted in the daily life of many individuals, platforms monetise social interactions and turn to questionable labour practices. Many sectors and social practices are being “platformised”, from public health to security, from news to entertainment services. Lately, some scholars have conceptualised this phenomenon as “platform capitalism” or “platform society”.

Privacy Camp 2019 will unpack the implications of “platformisation” for the socio-political fabric, human rights and policy making. In particular, how does the platform logic shape our experiences and the world we live in? How do institutional actors attempt to regulate platforms? In what ways do the affordances and constraints of platforms shape how people share and make use of their data?

Participate!

We welcome panel proposals relating to the broad theme of platforms. Besides classic panel proposals we are also seeking short contributions for our workshop “Situating Platforms: User Narratives”.

1. Panel proposals

We are particularly interested in panel proposals on the following topics: platform economy and labour; algorithmic bias; democratic participation and social networks.

Submission guidelines:

  • Indicate a clear objective for your session, i.e. what would be a good outcome for you?
  • Indicate other speakers that could participate in your panel (and let us know which speaker has already confirmed, at least in principle, to participate).
  • Make it as participatory as possible: think about how to include the audience and diverse actors. Note that the average panel length is 75 minutes.
  • Send us a description of no more than 400 words.

2. “Situating Platforms: User Narratives” submissions

In an effort to discuss situated contexts with regard to platforms, we will have a session on lived practices and user narratives. Individuals, civil society groups or community associations are welcome to contribute in the format of a short talk or show & tell demonstration. Details and the online submission form are here: [[link to submission form coming soon!]]

Deadline

The deadline for all submissions is 18 November. After the deadline, we will review your submission and let you know by the end of November whether your proposal can be included in the programme. It is possible that we suggest merging panel proposals if they are very similar.

Please send your proposal via email to privacycamp(at)edri.org!

If you have questions, please contact Kirsten at kirsten.fiedler(at)edri(dot)org or Imge at imge.ozcan(at)vub(dot)be.

About Privacy Camp

Privacy Camp is jointly organised by European Digital Rights (EDRi), the Institute for European Studies of the Université Saint-Louis – Bruxelles (USL-B), the Law, Science, Technology & Society research group of the Vrije Universiteit Brussel (LSTS-VUB), and Privacy Salon.

Participation is free. Registrations will open in early December.

26 Sep 2018

Anatomy of an AI system – from the Earth’s crust to our homes

By SHARE Foundation

The Internet of Things (IoT) and the numerous devices that surround us, letting us get through our daily routines with more convenience, are becoming ever more advanced. A “smart” home is not a futuristic notion anymore – it is reality. However, there is another side to this convenient technology: the one that exploits material resources, human labour and data.

In their latest research, Kate Crawford from New York University’s AI Now Institute, a research institute examining the social implications of artificial intelligence (AI), and Vladan Joler from EDRi member SHARE Foundation’s SHARE Lab have analysed the extraction of resources across time – represented as a visual description of the birth, life and death of a single Amazon Echo unit. The interlaced chains of resource extraction, human labour and algorithmic processing across networks of mining, logistics, distribution, prediction and optimisation make the scale of this system almost beyond human imagining. The whole process is presented on a detailed high-resolution map.

It is easy to give Alexa a command – you just need to say “play music”, “read my last unread email” or “add milk to my shopping list” – but this small moment of convenience requires a vast planetary network, fuelled by the extraction of non-renewable materials, labour, and data. The scope is overwhelming: hard labour in mines for extracting the minerals that form the physical basis of information technologies, strictly controlled and sometimes dangerous hardware manufacturing and assembly processes in Chinese factories, outsourced cognitive workers in developing countries labelling AI training data sets, all the way to the workers at toxic waste dumps. All these processes create new accumulations of wealth and power, which are concentrated in a very thin social layer.


These extractive processes take an enormous toll in terms of pollution and energy consumption, although it is not visible until you scratch the surface. Moreover, many aspects of human behaviour are being recorded, quantified into data, used to train AI systems and enclosed as “intellectual property”. Many of the assumptions about human life made by machine learning systems are narrow, normative and laden with errors, yet they are inscribing and building those assumptions into a new world, and will increasingly play a role in how opportunities, wealth and knowledge are distributed.

Anatomy of an AI system
https://anatomyof.ai/

Map: Anatomy of an AI system
https://anatomyof.ai/img/ai-anatomy-map.pdf

(Contribution by Bojan Perkov, EDRi member SHARE Foundation, Serbia)

26 Sep 2018

Five reasons to be concerned about the Council ePrivacy draft

By IT-Pol

On 19 October 2017, the European Parliament’s LIBE Committee adopted its report on the ePrivacy Regulation. The amendments improve the original proposal by strengthening confidentiality requirements for electronic communication services, and include a ban on tracking walls, legally binding signals for giving or refusing consent to online tracking, and privacy-by-design requirements for web browsers and apps. Before trilogue negotiations can start, the Council of the European Union (the Member States’ governments) must adopt its “general approach”. The Council Presidency, currently held by Austria, is tasked with securing a compromise among the Member States. This article analyses the most recent draft text from the Austrian Council Presidency (document 12336/18).

Further processing of electronic communications metadata

The current ePrivacy Directive only allows processing of electronic communications metadata for specific purposes given in the Directive, such as billing. The draft Council ePrivacy text in Article 6(2a) introduces further processing for compatible purposes similar to Article 6(4) of the General Data Protection Regulation (GDPR). This further processing must be based on pseudonymous data, profiling individual users is not allowed, and the Data Protection Authority must be consulted.

Despite these safeguards, this new element represents a huge departure from the current ePrivacy Directive, since the electronic communications service provider will determine what constitutes a compatible purpose. The proposal comes very close to introducing a “legitimate interest” loophole as a legal basis for processing sensitive electronic communications metadata. Formally, the further processing must be subject to the original legal basis, but what this means in the ePrivacy context is not entirely clear, since the main legal basis is a specific provision in the Regulation, such as processing for billing, calculating interconnection payments, or maintaining or restoring the security of electronic communications networks.


An example of further processing could be tracking mobile phone users for “smart city” applications such as traffic planning, or monitoring the travel patterns of tourists via their mobile phones. Even though the purpose of the processing must be to obtain aggregate information, and not to target individual users, metadata will still be retained for individual users in identifiable form in order to link existing data records with new data records (using a persistent pseudonymous identifier). It therefore becomes a form of voluntary data retention. The mandatory safeguard of pseudonymisation does not prevent the electronic communications service provider from subsequently identifying individual users if law enforcement authorities obtain a court order for access to retained data on individual users.
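This mechanism can be illustrated with a minimal sketch (invented phone numbers, keys and cell identifiers; not taken from any actual provider): metadata records are keyed by a pseudonym derived from the phone number, which lets the provider link old and new records for aggregation, but also lets it recompute the pseudonym for a known subscriber if ordered to hand over that person’s data.

    # Sketch of linkage via a persistent pseudonymous identifier.
    # Invented numbers and keys, for illustration only.
    import hashlib

    SECRET = b"provider-internal-key"  # known only to the provider

    def pseudonym(msisdn: str) -> str:
        """Derive a persistent pseudonymous identifier from a phone number."""
        return hashlib.sha256(SECRET + msisdn.encode()).hexdigest()[:16]

    # Location records are keyed by pseudonym, so new data links to old data.
    records = {}
    for msisdn, cell in [("+4512345678", "cell_A"), ("+4512345678", "cell_B")]:
        records.setdefault(pseudonym(msisdn), []).append(cell)

    print(records)  # one pseudonym, one linked movement history

    # Re-identification: the provider can recompute the pseudonym for a known
    # subscriber, e.g. when ordered to hand over data on that individual.
    print(records[pseudonym("+4512345678")])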

Communications data only protected in transit

Whereas the text adopted by the European Parliament specifically amends the Commission proposal to ensure that electronic communications data is protected under the ePrivacy Regulation after it has been received, the Council text clarifies that the protection only applies in transit. After the communication has been received by the end-user, the GDPR applies, which gives the service provider much greater flexibility in processing the electronic communication data for other purposes. For a number of modern electronic communications services, storage of electronic communication data on a central server (instead of on the end-user device) is an integral part of the service. An example is the transition from SMS (messages are stored on the phone) to modern messenger services such as WhatsApp or Facebook Messenger (stored on a central server). This makes it important that the protection under the ePrivacy Regulation applies to electronic communications data after it has been received. The Council text fails to address this urgent need.

Tracking walls

The European Parliament introduced a ban on tracking walls, that is, the practice of denying users access to a website unless they consent to the processing of personal data via cookies (typically tracking for targeted advertising) that is not necessary for providing the service requested.

The Council text goes in the opposite direction by specifically allowing tracking walls in Recital 20 for websites where the content is provided without a monetary payment, if the website visitor is presented with an alternative option without this processing (tracking). This could be a subscription to an online news publication. The net effect of this is that personal data will become a commodity that can be traded for access to online news media or other online services. On the issue of tracking walls and coerced consent, the Council ePrivacy text may actually provide a lower level of protection than Article 7(4) of the GDPR, which specifically seeks to prevent personal data from becoming the counter-performance for a contract. This is contrary to the stated aim of the ePrivacy Regulation.

Privacy settings and privacy by design

The Commission proposal requires web browsers to offer the option of preventing third parties from storing information in the browser (terminal equipment) or processing information already stored in the browser. An example of this could be an option to block third-party cookies. The Council text proposes to delete Article 10 on privacy settings. The effect of this is that fewer users will become aware of privacy settings that protect them from leaking information about their online behaviour to third parties, and that software may be placed on the market that does not even offer the user the possibility of blocking data leakage to third parties.
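What such a setting would do can be sketched as follows (hypothetical domains; real browsers implement this with far more nuance): when a page from one domain embeds content from another domain, the setting simply refuses to store identifiers set by the embedded third party.

    # Simplified sketch of a third-party cookie blocking setting.
    # Hypothetical domains; real browser behaviour is far more nuanced.
    cookie_jar = {}
    BLOCK_THIRD_PARTY = True  # the kind of option Article 10 would require

    def store_cookie(page_domain: str, setting_domain: str, name: str, value: str):
        """Store a cookie only if the privacy setting allows it."""
        is_third_party = setting_domain != page_domain
        if BLOCK_THIRD_PARTY and is_third_party:
            return  # identifier from an embedded tracker is simply dropped
        cookie_jar.setdefault(setting_domain, {})[name] = value

    # A news page embeds an ad network that tries to set a tracking identifier.
    store_cookie("news.example", "news.example", "session", "abc")  # kept
    store_cookie("news.example", "ads.example", "uid", "xyz")       # blocked
    print(cookie_jar)  # only the first-party cookie remains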

Data retention

Article 15(1) of the current ePrivacy Directive allows Member States to require data retention in national law. Under the case law of the Court of Justice of the European Union (CJEU) in Digital Rights Ireland (joined cases C-293/12 and C-594/12) and Tele2 (joined cases C-203/15 and C-698/15), this data retention must be targeted rather than general and undifferentiated (blanket data retention). In the Commission proposal for the ePrivacy Regulation, Article 11 on restrictions is very similar to Article 15(1) of the current Directive.

In the Council text, Article 2(2)(aa) excludes activities concerning national security and defence from the scope of the ePrivacy Regulation. This includes processing performed by electronic communications service providers when assisting competent authorities in relation to national security or defence, for example retaining metadata (or even communications content) that would otherwise be erased or not generated in the first place. The effect of this is that data retention for national security purposes would be entirely outside the scope of the ePrivacy Regulation and, potentially, the case law of the CJEU on data retention. This circumvents a key part of the Tele2 ruling where the CJEU notes (para 73) that the protection under the ePrivacy Directive would be deprived of its purpose if certain restrictions on the rights to confidentiality of communication and data protection are excluded from the scope of the Directive.

If data retention (or any other processing) for national security purposes is outside the scope of the ePrivacy Regulation, it is unclear whether such data retention is instead subject to the GDPR, and must satisfy the conditions of GDPR Article 23 (which is very similar to Article 11 of the proposed ePrivacy Regulation), or whether it is completely outside the scope of EU law. The Council text would therefore create substantial legal uncertainty for data retention in Member States’ national law, undoubtedly to the detriment of the fundamental rights of many European citizens.

Proposal for a Regulation concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC – Examination of the Presidency text (20.09.2018)
www.parlament.gv.at/PAKT/EU/XXVI/EU/03/55/EU_35516/imfname_10840532.pdf

e-Privacy: What happened and what happens next (29.11.2017)
https://edri.org/e-privacy-what-happened-and-what-happens-next/

EU Member States fight to retain data retention in place despite CJEU rulings (02.05.2018)
https://edri.org/eu-member-states-fight-to-retain-data-retention-in-place-despite-cjeu-rulings/

EU Council considers undermining ePrivacy (25.07.2018)
https://edri.org/eu-council-considers-undermining-eprivacy/

Civil society letter to WP TELE on the ePrivacy Regulation (24.09.2018)
https://edri.org/files/eprivacy/201809-LettertoCouncil_FINAL.pdf

(Contribution by Jesper Lund, EDRi member IT-Pol, Denmark)

29 Aug 2018

What’s your trustworthiness according to Facebook? Find out!

By Bits of Freedom

On 21 August 2018 it was revealed that Facebook rates the trustworthiness of its users in its attempt to tackle misinformation. But how does Facebook judge you, what are the consequences and… how do you score? Ask Facebook by exercising your access right!


Your reputation is 0 or 1

In an interview with the Washington Post, the product manager in charge of fighting misinformation at Facebook said that one of the factors the company uses to determine whether you are spreading “fake news” is a so-called “trustworthiness score”. (Users are assigned a score of 0 or 1.) In addition to this score, Facebook apparently also uses many other indicators to judge its users. For example, it takes into account whether you abuse the option to flag messages.

Lots of questions

The likelihood of you spreading misinformation (whatever that means) appears to be decided by an algorithm. But how does Facebook determine a user’s score? For which purposes will this score be used and what if the score is incorrect?

Facebook has objected to the description of this system as reputation rating. To the BBC a spokesperson responded: “The idea that we have a centralised ‘reputation’ score for people that use Facebook is just plain wrong and the headline in the Washington Post is misleading.”

It is unclear exactly how the headline is misleading, because if you turned it into the question “Is Facebook rating the trustworthiness of its users?”, the answer would be yes. In any event, the above questions remain unanswered. That is unacceptable, because Facebook is not just any old actor. Together with a handful of other tech giants, the company plays an important role in how we communicate and in which information we send and receive. The decisions Facebook makes about you have an impact. Assigning you a trustworthiness score therefore comes with great responsibility.

Facebook has to share your score with you

At the very least, such a system should be fair and transparent. If mistakes are made, there should be an easy way for users to have those mistakes rectified. According to Facebook, however, this basic level of courtesy is not possible, because it could lead to people gaming the system.

However, with the new European privacy rules (GDPR) in force, Facebook cannot use this reason as an excuse for dodging these important questions and keeping its trustworthiness assessment opaque. As a Facebook user living in the EU, you have the right to access the personal data Facebook holds about you. If these data are incorrect, you have the right to have them rectified.

Assuming that your trustworthiness score is the result of an algorithm crunching the data Facebook collects about you, and taking into account that this score can have a significant impact, you also have the right to receive meaningful information about the underlying logic of your score and you should be able to contest your score.

Send an access request

Do you live in the European Union and do you want to exercise your right to obtain your trustworthiness score? Send an access request to Facebook! You can send your request by post, email or by using Facebook’s online form. To help you with exercising your access right, Bits of Freedom created a request letter for you. You can find it here.

Read more:

Example of request letter to send by regular mail (.odt file download link)
https://www.bof.nl/wp-content/uploads/2018/08/facebook-access-request-trustworthiness-assessment-physical-mail.odt

Example text to use for email / online form (.odt file download link)
https://www.bof.nl/wp-content/uploads/2018/08/facebook-access-request-trustworthiness-assessment-form-or-email.odt

Don’t make your community Facebook-dependent! (21.02.2018)
https://edri.org/dont-make-your-community-facebook-dependent/

Press Release: “Fake news” strategy needs to be based on real evidence, not assumption (26.04.2018)
https://edri.org/press-release-fake-news-strategy-needs-based-real-evidence-not-assumption/

(Contribution by David Korteweg, EDRi member Bits of Freedom, the Netherlands)

25 Jul 2018

New Protocol on cybercrime: a recipe for human rights abuse?

By EDRi

From 11 to 13 July 2018, the Electronic Frontier Foundation (EFF) and European Digital Rights (EDRi) took part in the Octopus Conference 2018 at the Council of Europe together with Access Now to present the views of a global coalition of civil society groups on the negotiations of more than 60 countries on access to electronic data by law enforcement in the context of criminal investigations.


There is a global consensus that mutual legal assistance among countries needs to be improved. However, recognising its inefficiencies should not translate into bypassing Mutual Legal Assistance Treaties (MLATs) by going to service providers directly, thereby losing the procedural and human rights safeguards embedded in them. Some of the issues with MLATs can be solved by, for example, technical training for law enforcement authorities, simplification and standardisation of forms, single points of contact, or increased resources. For instance, thanks to a recent US “MLAT reform programme” that increased the resources available to handle MLATs, the US Department of Justice reduced the number of pending cases by a third.

There is a worrisome legislative trend emerging through the US CLOUD Act and the European Commission’s “e-evidence” proposals to access data directly from service providers. This trend risks creating a race to the bottom in terms of due process, court checks, fair trials, privacy and other human rights safeguards.

If the current Council of Europe negotiations on cybercrime focused on improving mutual legal assistance, they could offer an opportunity to create a human rights-respecting alternative to dangerous shortcuts such as those proposed in the US CLOUD Act or the EU proposals. However, civil rights groups have serious concerns from both a procedural and a substantive perspective.

This process is being conducted without regular and inclusive participation of civil society or data protection authorities. Nearly 100 NGOs wrote to the Council of Europe’s Secretary General in April 2018 because they were not duly included in the process. While the Council of Europe issued a response, civil society groups reiterated that civil society participation and inclusion go beyond a public consultation, participation in a conference and comments on texts preliminarily agreed by States. Human rights NGOs should be present in drafting meetings, both to learn from the law enforcement expertise of the 60+ countries and to provide expert human rights input in a timely manner.

From a substantive point of view, the process is being built on the faulty premise that the anticipated signatories to the Convention on Cybercrime (“the Budapest Convention”) share a common understanding of basic human rights protections and legal safeguards. As a result of this presumption, it is unclear how the proposed Protocol can provide the strong data protection and critical human rights vetting mechanisms that are embedded in the current MLAT system.

One of the biggest challenges in the Council of Europe process to draft an additional protocol to the Cybercrime convention – a challenge that was evident in the initial Cybercrime convention itself and in its article 15 in particular – is the assumption that signatory Parties share (and will continue to share) a common baseline of understanding with respect to the scope and nature of human rights protections, including privacy.

Unfortunately, there is neither a harmonised legal framework among the countries participating in the negotiations nor a shared human rights understanding. Experience shows that countries need to bridge the gap between national legal frameworks and practices on the one hand, and the human rights standards established by the case law of the highest courts on the other. For example, the Court of Justice of the European Union (CJEU) has held on several occasions that blanket data retention is illegal under EU law. Yet the majority of EU Member States still have blanket data retention laws in place. Other states involved in the protocol negotiations, such as Australia, Mexico and Colombia, have implemented precisely the type of sweeping, unchecked and indiscriminate data retention regimes that the CJEU has ruled out.

As a result of a lack of a harmonised human rights and legal safeguards protection, the forthcoming protocol proposals risk:

– Bypassing critical human rights vetting mechanisms inherent in the current MLAT system that are currently used to, among other things, navigate conflicts in fundamental human rights and legal safeguards that inevitably arise between countries;

– Seeking to encode practices that fall below the minimum standards being established in various jurisdictions, by ignoring human rights safeguards established primarily by the case law of the European Court of Human Rights and the Court of Justice of the European Union, among others;

– Including few substantive limits, relying instead on signatories’ legal systems to include enough safeguards to ensure human rights are not violated in cross-border access situations, together with a general, non-specific and unenforced requirement that signatories ensure adequate safeguards (see Article 15 of the Cybercrime Convention).

Parties to the negotiations should render human rights safeguards operational – as human rights are the cornerstones of our society. As a starting point, NGOs urge countries to sign, ratify and diligently implement Convention 108+ on data protection. In this sense, EDRi and EFF welcome the comments of the Council of Europe’s Convention 108 Committee.

Finally, civil society groups urge that the forthcoming protocol not include a mandatory or voluntary mechanism for obtaining data directly from companies without appropriate safeguards. While the proposals seem to be limited to subscriber data, there are serious risks that the interpretation of what constitutes subscriber data will be expanded so as to lower safeguards, including access to metadata directly from providers through non-judicial requests or demands.

This could conflict with clear rulings of the European Court of Human Rights, such as the Benedik v. Slovenia case, or even with national case law, such as that of Canada’s Supreme Court. The global NGO coalition therefore reiterates that the focus should be put on making mutual legal assistance among countries more efficient.

Civil society is ready to engage in the negotiations. For now, however, the future of the second additional protocol to the Cybercrime Convention remains unclear, raising many concerns and questions.

Read more:

Joint civil society response to discussion guide on a 2nd Additional Protocol to the Budapest Convention on Cybercrime (28.06.2018)
https://edri.org/files/consultations/globalcoalition-civilsocietyresponse_coe-t-cy_20180628.pdf

How law enforcement can access data across borders — without crushing human rights (04.07.2018)
https://ifex.org/digital_rights/2018/07/04/coe_convention_185_2ndamend_supletter/

Nearly 100 public interest organisations urge Council of Europe to ensure high transparency standards for cybercrime negotiations (03.04.2018)
https://edri.org/global-letter-cybercrime-negotiations-transparency/

A Tale of Two Poorly Designed Cross-Border Data Access Regimes (25.04.2018)
https://www.eff.org/deeplinks/2018/04/tale-two-poorly-designed-cross-border-data-access-regimes

Cross-border access to data has to respect human rights principles (20.09.2017)
https://edri.org/crossborder-access-to-data-has-to-respect-human-rights-principles/

(Contribution by Maryant Fernández Pérez, EDRi, and Katitza Rodríguez, EFF)

21 Jun 2018

ENAR and EDRi join forces for diligent and restorative solutions to illegal content online

By Maryant Fernández Pérez

The European Network Against Racism (ENAR) and European Digital Rights (EDRi) joined forces to draw up some core principles in the fight against illegal content online. Our position paper springs both from the perspective of victims of racism and that of free speech and privacy protection.

The European Commission has so far not been successful in tackling illegal content in a way that provides a redress mechanism for victims. In fact, the European Commission has for far too long focused on a “public relations regime” centred on how quickly and how many online posts have been deleted, while lacking a diligent approach to addressing the deeper problems behind the removed content. Indeed, the European Commission has continuously promoted rather superficial “solutions” that do not deal with the problems faced by victims of illegal activity in a meaningful way.

At the same time, the European Commission’s approach is undermining people’s rights to privacy and freedom of expression by urging and pressuring internet giants to take over privatised law enforcement functions. As a consequence, ENAR and EDRi have agreed on a joint position paper, following our commitment to ensure fundamental rights for all.

Our joint position paper relies on four basic principles:

1. No place for arbitrary restrictions – Any measure that is implemented must be predictable and subject to real accountability.

2. Diligent review processes – Any measure must be implemented on the basis of neutral assessment, rather than being left entirely to private parties, particularly as they may have significant conflicts of interest.

3. Learning lessons – Any measure implemented must be subject to thorough evidence-gathering and review processes.

4. Different solutions for different problems – No superficial measure in relation to incitement to violence or hatred should be implemented without clear obligations on all relevant stakeholders to play their role in dealing with the content in a comprehensive manner. Illegal racist content inciting violence or discrimination should be referred to competent and properly resourced law enforcement authorities for adequate sanctions if it meets the criminal threshold. States must also ensure that laws on racism and incitement to violence are based on solid evidence and respect international human rights law.

This paper follows cooperation between the two organisations over the past few years to bring the digital rights community and the anti-racist movement together in a more comprehensive way. The common initiative comes at a time when the European Commission is consulting stakeholders and individuals for their opinions on how to tackle illegal content online, with a deadline of 25 June 2018. EDRi has developed an answering guide for individuals who consider that the European Union should take a diligent, long-term approach that protects both victims of illegal content, such as racism online, and victims of free speech restrictions.

(Contribution by Maryant Fernández Pérez, EDRi Senior Policy Advisor)

Read more:

ENAR-EDRi Joint position paper: Tackling illegal content online – principles for efficient and restorative solutions (20.06.2018)
https://edri.org/files/enar-edri_illegalcontentposition_final_20180620.pdf

EDRi Answering guide to EU Commission’s “illegal” content “consultation” (13.06.2018)
https://edri.org/answering-guide-eu-commission-illegal-content-consultation/

Commission’s position on tackling illegal content online is contradictory and dangerous for free speech (28.09.2017)
https://edri.org/commissions-position-tackling-illegal-content-online-contradictory-dangerous-free-speech/

EU Commission’s Recommendation: Let’s put internet giants in charge of censoring Europe (28.09.2017)
https://edri.org/eu-commissions-recommendation-lets-put-internet-giants-in-charge-of-censoring-europe/
