
ePrivacy: Public benefit or private surveillance?

Ninety-two weeks after the proposal was published, the EU is still waiting for an ePrivacy Regulation. The Regulation is supposed to replace the current ePrivacy Directive and bring its rules in line with the General Data Protection Regulation (GDPR).

By EDRi · October 24, 2018

While the GDPR regulates the processing of personal data in general, the ePrivacy Regulation specifically protects the privacy and confidentiality of electronic communications. The data in question includes not only the content of communications and their “metadata” (data on when, where and with whom a person communicated), but also other identifiers such as “cookies” stored on users’ computers. To keep the legislation fit for purpose in light of technological developments, the European Commission (EC) proposal addresses some of the major changes in communications of the last decade, including the use of so-called “over the top” services, such as WhatsApp and Viber.


The Regulation is currently facing heavy resistance from certain sectors of the publishing and behavioural advertising industries. After the European Parliament (EP) adopted an improved text, the file is now being delayed in the Council of the European Union, where EU Member States are negotiating the text.

One of the major obstacles in the negotiations is the question of the extent to which providers such as telecommunications companies can use metadata for purposes other than providing the original service. Some private companies – the same ones that questioned the need for user consent in the GDPR – have now re-wrapped their argument, claiming that an “over-reliance” on consent would substantially hamper future technologies. Over-reliance on anything is, by definition, not good, and neither is under-reliance; such sophistry is a mainstay of lobby language.

However, this lobby attack omits the fact that compatible further processing would not lead only to benign applications in the public interest: since the proposal does not limit further processing to statistical or research purposes, it could just as well serve purposes such as commercial or political manipulation. Even with regard to the potentially more benevolent applications of AI, it should be kept in mind that automated data processing has in some cases been shown to be highly detrimental to parts of society, especially vulnerable groups. This should not be ignored when evaluating the safety and privacy of aggregate data.

For instance, while using location data for “smart cities” can make sense in some narrowly defined circumstances, such as traffic control or natural disaster management, it gains a much more chilling undertone when it leads to racial discrimination in company delivery services or in law enforcement activities. It is easy to imagine metadata – one of the most revealing and most easily processed forms of personal data – being used for equally crude or misaligned applications, with highly negative outcomes for vulnerable groups. Moreover, where aggregate, pseudonymised data produces adverse outcomes for an individual, not even rectifying or deleting that person’s data will improve their situation, as long as the accumulated data of similar individuals remains available.

Another pitfall of this supposedly private, ostensibly pseudonymised processing is that even if individual users are not targeted, companies may need to keep citizens’ metadata in identifiable form in order to link existing data sets with new ones. This could essentially amount to a form of voluntary data retention, which might soon attract the interest of public security actors rapaciously seeking new data sources and new powers. If such access were granted, individuals would essentially be identifiable. Even retaining “only” aggregate data on certain societal groups or minorities might already be enough to spark discriminatory treatment.

Although the Austrian Presidency of the Council of the European Union did include some noteworthy safeguards for compatible further processing in its most recent draft compromise – most notably the obligation to consult the national Supervisory Authority or to conduct a data protection impact assessment – the current proposal does not adequately empower individuals. Given that interpretations of what constitutes “compatible” further processing may vary significantly among Member States (which would lead to years of litigation), it should be up to citizens to decide (and up to the industry to prove) which forms of metadata processing are safe, fair and beneficial to society.

Five Reasons to be concerned about the Council ePrivacy draft (26.09.2018)
https://edri.org/five-reasons-to-be-concerned-about-the-council-eprivacy-draft/

EU Council considers undermining ePrivacy (25.07.2018)
https://edri.org/eu-council-considers-undermining-eprivacy/

Your ePrivacy is nobody else’s business (30.05.2018)
https://edri.org/your-eprivacy-is-nobody-elses-business/

e-Privacy revision: Document pool (10.01.2017)
https://edri.org/eprivacy-directive-document-pool/

(Contribution by Yannic Blaschke, EDRi intern)
