10 Oct 2019

Open letter to EU Member States: Deliver ePrivacy now!

By EDRi

On 11 October 2019, EDRi, together with four other civil society organisations, sent an open letter to EU Member States, urging them to conclude the negotiations on the ePrivacy Regulation. The letter highlights the urgent need for a strong ePrivacy Regulation to tackle the problems created by commercial surveillance business models, and expresses deep concern that the Member States, represented in the Council of the European Union, still have not made decisive progress, more than two and a half years after the Commission presented the proposal.

You can read the letter here (pdf) and below:

Open letter to EU Member States
11.10.2019

Dear Minister,

We, the undersigned organisations, urge you to swiftly reach an agreement in the Council of the European Union on the draft ePrivacy Regulation.

We are deeply concerned by the fact that, more than two and a half years since the Commission presented the proposal, the Council still has not made decisive progress. Meanwhile, privacy scandals hit the front pages one after another, from the exploitation of data in the political context, such as “Cambridge Analytica”, to the sharing of sensitive health data. In 2019, for example, an EDRi/CookieBot report demonstrated how EU governments unknowingly allow the ad tech industry to monitor citizens across public sector websites. [1] An investigation by Privacy International revealed how popular websites about depression in France, Germany and the UK share user data with advertisers, data brokers and large tech companies, while some depression test websites leak answers and test results to third parties. [2]

A strong ePrivacy Regulation is necessary to tackle the problems created by the commercial surveillance business models. Those business models, which are built on tracking and cashing in on people’s most intimate moments, have taken over the internet and create incentives to promote disinformation, manipulation and illegal content.

What Europe gains with a strong ePrivacy Regulation

The reform of the current ePrivacy Directive is essential to strengthen – not weaken – individuals’ fundamental rights to privacy and confidentiality of communications. [3] It is necessary to make current rules fit for the digital age. [4] In addition, a strong and clear ePrivacy Regulation would bolster Europe’s global leadership in the creation of a healthy digital environment, providing strong protections for citizens, their fundamental rights and our societal values. All this is key for the EU to regain its digital sovereignty, one of the goals set out by Commission President-elect Ursula von der Leyen in her political guidelines. [5]

Far from being an obstacle to the development of new technologies and services, the ePrivacy Regulation is necessary to ensure a level playing field and legal certainty for market operators. [6] It is an opportunity for businesses [7] to innovate and invest in new, privacy-friendly business models.

What Europe loses without a strong ePrivacy Regulation

Without the ePrivacy Regulation, Europe will continue living with an outdated Directive that is not being properly enforced, [8] and the legal framework whose completion began with the General Data Protection Regulation (GDPR) will remain unfinished. Without a strong Regulation, surveillance-driven business models will be able to cement their dominant positions [9] and continue posing serious risks to our democratic processes. [10][11] The EU also risks losing its position as the global standard-setter and digital champion that it earned through the adoption of the GDPR.

As a result, people’s trust in internet services will continue to fall. According to the Special Eurobarometer Survey of June 2019, the majority of users believe that they have only partial control over the information they provide online, and 62% of them are concerned about it.

The ePrivacy Regulation is urgently needed

We expect the EU to protect people’s fundamental rights and interests against practices that undermine the security and confidentiality of their online communications and intrude in their private lives.

As you meet today to discuss the next steps of the reform, we urge you to finally reach an agreement to conclude the negotiations and deliver an upgraded and improved ePrivacy Regulation for individuals and businesses. We stand ready to support your work.

Yours sincerely,

Access Now
The European Consumer Organisation (BEUC)
European Digital Rights (EDRi)
Privacy International
Open Society European Policy Institute (OSEPI)

[1] https://www.cookiebot.com/media/1121/cookiebot-report-2019-medium-size.pdf
[2] https://privacyinternational.org/long-read/3194/privacy-international-study-shows-your-mental-health-sale
[3] https://edpb.europa.eu/our-work-tools/our-documents/outros/statement-32019-eprivacy-regulation_en
[4] https://www.beuc.eu/publications/beuc-x-2017-090_eprivacy-factsheet.pdf
[5] https://ec.europa.eu/commission/sites/beta-political/files/political-guidelines-next-commission_en.pdf
[6] https://edpb.europa.eu/our-work-tools/our-documents/outros/statement-32019-eprivacy-regulation_en
[7] https://www.beuc.eu/publications/beuc-x-2018-108-eprivacy-reform-joint-letter-consumer-organisations-ngos-internet_companies.pdf
[8] https://edri.org/cjeu-cookies-consent-or-be-tracked-not-an-option/
[9] http://fortune.com/2017/04/26/google-facebook-digital-ads/
[10] https://www.theguardian.com/technology/2016/dec/04/google-democracy-truth-internet-search-facebook
[11] https://www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy

Read more:

Open letter to EU Member States on ePrivacy (11.10.2019)
https://edri.org/files/eprivacy/ePrivacy_NGO_letter_20191011.pdf

Right a wrong: ePrivacy now! (09.10.2019)
https://edri.org/right-a-wrong-eprivacy-now/

Civil society calls Council to adopt ePrivacy now (05.12.2018)
https://edri.org/civil-society-calls-council-to-adopt-eprivacy-now/

ePrivacy reform: Open letter to EU member states (27.03.2018)
https://edri.org/eprivacy-reform-open-letter-to-eu-member-states/

09 Oct 2019

Right a wrong: ePrivacy now!

By Ella Jakubowska

When the European Commission proposed to replace the outdated and improperly enforced 2002 ePrivacy Directive with a new ePrivacy Regulation in January 2017, it marked a cautiously hopeful moment for digital rights advocates across Europe. With the backdrop of the General Data Protection Regulation (GDPR), applicable since May 2018, Europe took a giant leap forward in the protection of personal data. Yet by failing to adopt the only piece of legislation protecting the rights to privacy and to the confidentiality of communications, the Council of the European Union seems to have prioritised private interests over the fundamental rights, security and freedoms of citizens that a strong ePrivacy Regulation would protect.

This is not an abstract problem: commercial surveillance models – where businesses exploit user data as a key part of their business activity – pose a serious threat to our freedom to express ourselves without fear. This model relies on profiling, essentially putting people into the boxes in which the platforms believe they belong – a very slippery slope towards discrimination. And as children make up an increasingly large proportion of internet users, the risks become even more stark: their online actions could impact their access to opportunities in the future. Furthermore, these models are set up to profit from the mass sharing of content, and so platforms are perversely incentivised to promote sensationalist posts that could harm democracy (for example, political disinformation).

The rise of highly personalised adverts (“microtargeting”) means that online platforms increasingly control and limit the parameters of the world that you see online, based on their biased and potentially discriminatory assumptions about who you are. And as for that online quiz about depression that you took? Well, that might not be as private as you thought.

It is high time that the Council of the European Union takes note of the risks to citizens caused by the current black hole where ePrivacy legislation should be. Amongst the doom and gloom, there are reasons to be optimistic. If delivered in its strongest form, an improved ePrivacy Regulation would complement the GDPR; ensure compliance with essential principles such as privacy by design and by default; tackle the pervasive model of online tracking and the disinformation it creates; and give power back to citizens over their private lives and interests. We urge the Council to swiftly update and adopt a strong, citizen-centred ePrivacy Regulation.

e-Privacy revision: Document pool
https://edri.org/eprivacy-directive-document-pool/

ePrivacy: Private data retention through the back door (22.05.2019)
https://edri.org/eprivacy-private-data-retention-through-the-back-door/

Captured states – e-Privacy Regulation victim of a “lobby onslaught” (23.05.2019)
https://edri.org/coe-eprivacy-regulation-victim-of-lobby-onslaught/

NGOs urge Austrian Council Presidency to finalise e-Privacy reform (07.11.2018)
https://edri.org/ngos-open-letter-austrian-council-presidency-eprivacy/

e-Privacy: What happened and what happens next (29.11.2017)
https://edri.org/e-privacy-what-happened-and-what-happens-next/

(Contribution by Ella Jakubowska, EDRi intern)

09 Oct 2019

Why weak encryption is everybody’s problem

By Ella Jakubowska

Representatives of the UK Home Department, the US Attorney General, US Homeland Security and the Australian Home Affairs ministry have joined forces to issue an open letter to Mark Zuckerberg. In their letter of 4 October, they urge Facebook to halt plans for end-to-end (aka strong) encryption across Facebook’s messaging platforms, unless such plans include “a means for lawful access to the content of communications”. In other words, the signatories are requesting what security experts call a “backdoor” for law enforcement to circumvent legitimate encryption methods in order to access private communications.

The myth of weak encryption as safe

Whilst the US, UK and Australia are adamant that their position enhances the safety of citizens, there are many reasons to be sceptical. The open letter uses emotive language to emphasise the risk of “child sexual exploitation, terrorism and extortion” that the signatories claim is associated with strong encryption, but fails to give a balanced assessment that includes the risks weak encryption poses to privacy, democracy and most business transactions. By positioning weak encryption as a “safety” measure, the US, UK and Australia imply (or even explicitly state) that supporters of strong encryption are supporting crime.

Government-led attacks on everybody’s digital safety aren’t new. Since the 1990s, the US has tried to prevent the export of strong encryption and—when that failed—worked on forcing software companies to build backdoors for the government. Those attempts were called the first “Cryptowars”.

In reality, however, arguing that encryption mostly helps criminals is like saying that vehicles should be banned and all knives blunt because both have been used by criminals and terrorists. Such reasoning ignores that in the vast majority of cases strong encryption greatly enhances people’s safety. From enabling secure online banking to keeping citizens’ messages private, internet users and companies rely on strong encryption every single day. It is the foundation of trusted, secure digital infrastructure. Weak encryption, on the other hand, is like locking the front door of your home, only to leave the back one open. Police may be able to enter more easily – but so too can criminals.
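To make the “back door” analogy concrete, here is a minimal sketch in Python (illustrative only, using the widely deployed cryptography library; this is not how Facebook’s messaging encryption works). The point it demonstrates: a backdoor is simply another copy of the key, and everyone’s security then depends on that copy never leaking.

```python
# Minimal sketch: strong encryption vs. a key-escrow "backdoor".
from cryptography.fernet import Fernet, InvalidToken

user_key = Fernet.generate_key()   # key held only by the two endpoints
ciphertext = Fernet(user_key).encrypt(b"private message")

# Strong (end-to-end) encryption: only the endpoint key can decrypt.
assert Fernet(user_key).decrypt(ciphertext) == b"private message"

# Without the key, police and criminals alike get nothing.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("wrong key: the ciphertext is useless")

# A "lawful access" backdoor means escrowing a copy of user_key with a
# third party. Whoever obtains that copy -- lawfully or not -- reads all.
escrowed_key = user_key            # the back door left open
print(Fernet(escrowed_key).decrypt(ciphertext))  # b'private message'
```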

Strong encryption is vital for protecting civil rights

The position outlined by the US, UK and Australia is fundamentally misleading. Undermining encryption harms innocent citizens. Encryption already protects some of the most vulnerable people worldwide – journalists, environmental activists, human rights defenders, and many more. State interception of private communications is frequently not benign: government hacking can and does lead to egregious violations of fundamental rights.

For many digital rights groups, this debate is the ultimate groundhog day, and valuable effort is expended year after year on challenging the false dichotomy of “privacy versus security”. Even the European Commission has struggled to sort fact from fear-mongering.

However, it is worth remembering that Facebook’s announcement that it will encrypt some user content is so far just that: an announcement. The advertising company’s approach to privacy is a supreme example of surveillance capitalism: protecting some users when it is favourable for its PR, and exploiting user data when there is a financial incentive to do so. To best protect citizens’ rights, we need a concerted effort between policy-makers and civil society to enact laws and build better technology so that neither our governments nor social media platforms can exploit us and our personal data.

The bottom line

Facebook must refuse to build anything that could constitute a backdoor into their messaging platforms. Otherwise, Facebook is handing the US, UK and Australian governments a surveillance-shaped skeleton key that puts Facebook users at risk worldwide. And once that door is unlocked, there will be no way to control who will enter.

EDRi Position paper on encryption: High-grade encryption is essential for our economy and our democratic freedoms (25.01.2015)
https://www.edri.org/files/20160125-edri-crypto-position-paper.pdf

Encryption – debunking the myths (03.05.2017)
https://www.edri.org/files/20160125-edri-crypto-position-paper.pdf

Encryption Workarounds: a digital rights perspective (12.09.2017)
https://edri.org/files/encryption/workarounds_edriposition_20170912.pdf

(Contribution by Ella Jakubowska, EDRi intern)

25 Sep 2019

Why EU passenger surveillance fails its purpose

By Epicenter.works

The EU Directive imposing the collection of air passengers’ information (Passenger Name Record, PNR) was adopted in April 2016, the same day as the General Data Protection Regulation (GDPR). The collection of PNR data from all flights going in and out of the EU has a strong impact on individuals’ right to privacy, and can only be justified on the basis of necessity and proportionality, and only if it meets objectives of general interest. All of this is lacking in the current EU PNR Directive, which Member States are currently implementing.

The Austrian implementation of the PNR Directive

In Austria, the Austrian Passenger Information Unit (PIU) has processed PNR data since March 2019. On 9 July 2019, the Passenger Data central office (Fluggastdatenzentralstelle) issued a response to inquiries into the PNR implementation in Austria. According to the document, between February 2019 and 14 May 2019, 7 633 867 records had been transmitted to the PIU. On average, about 490 hits per day are reported, with an average of about 3 430 hits per week requiring further verification. Out of the 7 633 867 reported records, there were 51 confirmed matches, and in 30 cases staff at the airport concerned intervened.

Impact on innocents

What this small show of success does not capture, however, is the damage inflicted on the thousands of innocent passengers who are wrongly flagged by the system and who can be subjected to damaging police investigations or denied entry into destination countries without proper cause. Mass surveillance that searches for a small, select group within the whole population is invasive, inefficient, and counter to fundamental rights. It subjects the majority of people to extreme security measures that are not only ineffective at catching terrorists and criminals, but that undermine privacy rights and can cause immense personal damage.

Why is this happening? The rate fallacy

Imagine that a city with a population of 1 000 000 people implements surveillance measures to catch terrorists. This particular surveillance system has a failure rate of 1%, meaning that (1) when a terrorist is detected, the system will register a hit 99% of the time and fail to do so 1% of the time, and (2) when a non-terrorist is detected, the system will not flag them 99% of the time, but will register the person as a hit 1% of the time. What is the probability that a person flagged by this system is actually a terrorist?

At first, it might look like there is a 99% chance of that person being a terrorist. Given the system’s failure rate of 1%, this prediction seems to make sense. However, this is an example of incorrect intuitive reasoning, because it ignores how rare actual terrorists are in the population – the base rate.

This is the base rate fallacy: the tendency to ignore base rates – actual probabilities – in the presence of specific, individuating information. Rather than integrating general information and statistics with information about an individual case, the mind tends to ignore the former and focus on the latter. One manifestation of the base rate fallacy is the false positive paradox suggested above, in which false positive tests are more probable than true positive tests. This occurs when the overall population has a low incidence of a given condition, so that the true incidence rate of the condition is lower than the false positive rate. Deconstructing the false positive paradox shows that the true chance of this person being a terrorist is closer to 1% than to 99%.
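In probabilistic terms, this is Bayes’ theorem at work (writing T for “is a terrorist” and H for “flagged as a hit”):

```latex
P(T \mid H) = \frac{P(H \mid T)\,P(T)}{P(H \mid T)\,P(T) + P(H \mid \neg T)\,P(\neg T)}
```

When the prevalence P(T) is tiny, the false positive term P(H|¬T)·P(¬T) dominates the denominator, so P(T|H) collapses even for a highly accurate test.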

In our example, out of one million inhabitants, there would be 999 900 law-abiding citizens and 100 terrorists. The number of true positives registered by the city’s surveillance system is 99, while the number of false positives is 9 999 – a number that would overwhelm even the best system. In all, 10 098 people – 9 999 non-terrorists and 99 actual terrorists – will trigger the system. This means that, due to the high number of false positives, the probability that a flagged person is actually a terrorist is not 99% but below 1%. Searching large data sets for a few suspects means that only a small number of hits will ever be genuine. This is a persistent mathematical problem that cannot be avoided, even with improved accuracy.
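The arithmetic can be checked in a few lines (a worked sketch of the numbers above, using the article’s assumed rates):

```python
# Base rate fallacy, worked through with the article's numbers.
population = 1_000_000
terrorists = 100
innocents = population - terrorists        # 999 900
hit_rate = 0.99                            # true positive rate
false_positive_rate = 0.01

true_positives = hit_rate * terrorists             # 99
false_positives = false_positive_rate * innocents  # 9 999
total_flagged = true_positives + false_positives   # 10 098

# P(actually a terrorist | flagged by the system)
print(f"{true_positives / total_flagged:.2%}")     # 0.98% -- below 1%
```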

Security and privacy are not incompatible – rather, there is a necessary balance that must be determined by each society. The PNR system, by relying on faulty mathematical assumptions, ensures that neither security nor privacy is protected.

Epicenter.works
https://en.epicenter.works/

PNR – Passenger Name Record
https://en.epicenter.works/thema/pnr-0

Passenger surveillance brought before courts in Germany and Austria (22.05.2019)
https://edri.org/passenger-surveillance-brought-before-courts-in-germany-and-austria/

We’re going to overturn the PNR directive (14.05.2019)
https://en.epicenter.works/content/were-going-to-overturn-the-pnr-directive-0

NoPNR – We are taking legal action against the mass processing of passenger data!
https://nopnr.eu/en/home/

An Explainer on the Base Rate Fallacy and PNR (22.07.2019)
https://en.epicenter.works/content/an-explainer-on-the-base-rate-fallacy-and-pnr

(Contribution by Kaitlin McDermott, EDRi-member Epicenter.works, Austria)

23 Jul 2019

Your family is none of their business

By Andreea Belu
  • Today’s children have the most complex digital footprint in human history, with their data being collected by private companies and governments alike.
  • The consequences for a child’s future revolve around their freedom to learn from mistakes, the reputational damage caused by past mistakes, and the traumatic effects of discriminatory algorithms.

Summer is that time of the year when parents get to spend more time with their children. Often enough, this also means children get to spend more time with electronic devices, their own or their parents’. Taking a selfie with the little one, or keeping them busy with a Facebook game or a YouTube animations playlist – these are the kinds of practices that give today’s child the largest digital footprint in human history.

Who wants your child’s data?

Mobile phones, tablets and other electronic devices can open the door to the exploitation of data about the person using the device – how old they are, what race they are, where they are located, what websites they visit, and so on. Often enough, that person is a child. But who would want a child’s data?

Companies that develop “smart” toys are the first example. In the past year, they have been in the spotlight for excessively collecting, storing and mishandling minors’ data. Perhaps you still remember the notorious case of “My Friend Cayla”, the “smart” doll that was shown to record children’s conversations and share them with advertisers. In fact, the doll was banned in Germany as an illegal “hidden espionage device”. However, the list of “smart” technologies collecting children’s data is long. Another example of a private company mistreating children’s data is Google, which offered its school products to young American students and tracked them across their different (home) devices to train other Google products. A German DPA (Data Protection Authority) decided to ban Microsoft Office 365 from schools over privacy concerns.

Besides private companies, state authorities have an interest in recording, storing and using children’s online activity. For example, a 2018 Big Brother Watch report points out that in the United Kingdom the “Department for Education (DfE) demands a huge volume of data about individual children from state funded schools and nurseries, three times every year in the School Census, and other annual surveys.” Data collected by schools (a child’s name, birth date, ethnicity, school performance, special educational needs and so on) is combined with social media profiles or other data (e.g. household data) bought from data brokers. Why link all these records? Local authorities wish to focus on training algorithms that predict children’s behaviour in order to identify “certain” children prone to gang affiliation or political radicalisation.

Consequences for a child’s future

Today’s children have the largest digital footprint in human history. Sometimes the collection of a child’s data starts even before they are born, and this data will increasingly determine their future. What does this mean for kids’ development and their life choices?

The extensive data collection of today’s children aims at neutralising behavioural “errors” and optimising their performance. But mistakes are valuable during a child’s self-development – committing errors and learning lessons is an important complement to receiving knowledge from adults. In fact, a recent psychology study shows that failing to provide an answer to a test benefits the learning process. Constantly using algorithms to optimise performance based on a child’s digital footprint will damage the child’s right to make and learn from mistakes.


A child’s mistakes are not only a source of important lessons. With a rising number of attacks targeting schools’ IT systems, children’s data can get into the wrong hands. Silly mistakes could also be used to damage the reputation of the future adult a child grows into. Some mistakes must be forgotten. However, logging every step in a child’s development increases the risk that past mistakes are later used against them.

Moreover, children’s data can contribute to them being discriminated against. As mentioned above, data is used to predict child behaviour, with authorities aiming to intervene where they consider it necessary. But algorithms reproduce human biases, for example against people of colour. What happens when a child of colour is predicted to be at risk of gang affiliation? Reports show that authorities treat children at risk of being recruited by a gang as if they were already part of the gang. Racial profiling by algorithms can thus turn into a traumatic experience for a child.

EDRi is actively trying to protect you and your loved ones

European Digital Rights is a network of 42 organisations that promote the respect of privacy and other human rights online.

Our free “Digital Defenders” booklet for children (available in many languages) teaches in a fun and practical way why and how to protect our privacy online. EDRi is also working on the ongoing reform of the online privacy (ePrivacy) rules. This reform has great potential to curb data exploitation practices online.

Read more:

Privacy for Kids: Your guide to Digital Defenders vs. Data Intruders (free download)
https://edri.org/papers

DefendDigitalMe: a call to action to protect children’s rights to privacy and family life.
https://defenddigitalme.com/

Blogpost series: Your privacy, security and freedom online are in danger (14.09.2016)
https://edri.org/privacy-security-freedom/

e-Privacy revision: Document pool (10.01.2017)
https://edri.org/eprivacy-directive-document-pool/

17 Jul 2019

New privacy alliance to be formed in Russia, Central and Eastern Europe

By EDRi

Civil society advocates from Russia and Central and Eastern Europe have joined forces to form a new inter-regional NGO to promote privacy in countries bordering the EU.

The initiative also involves activists from post-Soviet countries, the Balkans and the EU accession candidate countries. One of its primary objectives is to build coalitions and campaigns in countries that have weak or non-existent privacy protections. The project emerged from a three-day regional privacy workshop held earlier in 2019 at the Nordic Non-violence Study Group (NORNONS) centre in Sweden. The workshop agreed that public awareness of privacy in the countries represented was at a dangerously low level, and concluded that better collaboration between advocates is one solution.

There has been a pressing need for such an alliance for many years. A vast arc of countries from Russia through Western Asia and into the Balkans has been largely overlooked by international NGOs and intergovernmental organisations (IGOs) concerned with privacy and surveillance.

The initiative was convened by Simon Davies, founder of EDRi member Privacy International and the Big Brother Awards. He warned that government surveillance and abuse of personal information has become endemic in many of those countries:

“There is an urgency to our project. The citizens of places like Azerbaijan, Kazakhstan, Kyrgyzstan, Turkmenistan, and Armenia are exposed to wholesale privacy invasion, and we have little knowledge of what’s going on there. Many of these countries have no visibility in international networks. Most have little genuine civil society, and their governments engage in rampant surveillance. Where there is privacy law, it is usually an illusion. This situation applies even in Russia.”

A Working Group has been formed involving advocates from Russia, Serbia, Georgia, Ukraine and Belarus, and its membership includes Danilo Krivokapić from EDRi member SHARE Foundation in Serbia. The role of this group is to steer the legal foundation of the initiative and to approve a formal Constitution.

The initiative’s Moderator is the former Ombudsman of Georgia, Ucha Nanuashvili. He too believes that the new NGO will fill a gaping void in privacy activism:

“In my view, regions outside the EU need this initiative. Privacy is an issue that is becoming more prominent, and yet there is very little regional collaboration and representation. Particularly in the former Soviet states there’s an urgent need for an initiative that brings together advocates and experts in a strong alliance.”

Seed funding for the project has been provided by the Public Voice Fund of the Electronic Privacy Information Center (EPIC). EPIC’s president, Marc Rotenberg, welcomed the initiative and said he believed it would “contribute substantially” to the global privacy movement:

“We have been aware for some time that there is a dangerous void around privacy protection in those regions. We appreciate the good work of NGOs and academics to undertake this important collaboration.”

The Working Group hopes to formally launch the NGO in October in Albania. The group is presently considering several options for a name. Anyone interested in supporting the work of the initiative or wanting more information can contact Simon Davies at simon <at> privacysurgeon <dot> org.

The Nordic Nonviolence Study Group
https://www.nornons.org/

SHARE Foundation
https://www.sharefoundation.info/en/

EPIC’s Public Voice fund
https://epic.org/epic/publicvoicefund/

Mass surveillance in Russia
https://en.wikipedia.org/wiki/Mass_surveillance_in_Russia

Ucha Nanuashvili, Georgian Human Rights Centre
http://www.hridc.org/

17 Jul 2019

Microsoft Office 365 banned from German schools over privacy concerns

By Jan Penfrat

In a bombshell decision, the Data Protection Authority (DPA) of the German Land of Hesse has ruled that schools are banned from using Microsoft’s cloud office product “Office 365”. According to the decision, the platform’s standard settings expose personal information about school pupils and teachers “to possible access by US officials” and are thus incompatible with European and local data protection laws.

The ruling is the result of several years of domestic debate about whether German schools and other state institutions should be using Microsoft software at all, reports ZDNet. In 2018, investigators in the Netherlands discovered that the data collected by Microsoft “could include anything from standard software diagnostics to user content from inside applications, such as sentences from documents and email subject lines.” All of which contravenes the General Data Protection Regulation (GDPR) and potentially local laws for the protection of personal data of underaged pupils.

While Microsoft’s “Office 365” is not a new product, the company has recently changed its offering in Germany. Until now, it provided customers with a special German cloud version hosted on servers run by German telecoms giant Deutsche Telekom. Deutsche Telekom served as a kind of infrastructure trustee, putting customer data outside the legal reach of US law enforcement and intelligence agencies. In 2018, however, Microsoft announced that this special arrangement would be terminated in 2019 and that German customers would be offered a move to Microsoft’s standard cloud offering in the EU.

Microsoft insists that nothing changes for customers because the new “Office 365” servers are also located in the EU or even in Germany. However, legal developments in the US have put the Hesse DPA on high alert: the newly enacted US CLOUD Act empowers US government agencies to request access to customer data from all US-based companies, no matter where their servers are located.

To make things even worse, Germany’s Federal Office for Information Security (BSI) recently expressed concerns about the telemetry data that the Windows 10 operating system collects and transmits to Microsoft. So even if German (or European) schools stopped using the company’s cloud office suite, its ubiquitous Windows operating system would still leak data to the US, with no way for users to control or stop it.

School pupils are usually not able to give consent, Max Schrems from EDRi member noyb told ZDNet. “And if data is sent to Microsoft in the US, it is subject to US mass surveillance laws. This is illegal under EU law.” Even if that was legal, says the Hesse DPA, schools and other public institutions in Germany have a “particular responsibility for what they do with personal data, and how transparent they are about that.”

It seems that fulfilling those responsibilities hasn’t been possible while using Microsoft Office 365. As a next step, it is crucial that European DPAs discuss these findings within the European Data Protection Board to arrive at an EU-wide rule that protects children’s personal data from unregulated access by US agencies. Otherwise, European schools would be well-advised to switch to privacy-friendly alternatives such as Linux, LibreOffice and Nextcloud.

Statement of the Commissioner for Data Protection and Freedom of Information of the Land of Hesse regarding the use of Microsoft Office 365 in schools in Hesse (only in German, 09.07.2019)
https://datenschutz.hessen.de/pressemitteilungen/stellungnahme-des-hessischen-beauftragten-f%C3%BCr-datenschutz-und

Microsoft Office 365: Banned in German schools over privacy fears (12.07.2019)
https://www.zdnet.com/article/microsoft-office-365-banned-in-german-schools-over-privacy-fears

Microsoft offers cloud services in new German data centers as of 2019 in reaction to changes in demand (only in German, 31.08.2018)
https://news.microsoft.com/de-de/microsoft-cloud-2019-rechenzentren-deutschland/

(Contribution by Jan Penfrat, EDRi)

04 Jul 2019

Real Time Bidding: The auction for your attention

By Andreea Belu

The digitalisation of marketing has introduced novel industry practices and business models. Some of these new systems have developed into crucial threats to people’s freedoms. A particularly alarming one is Real Time Bidding (RTB).

When you visit a website, you often encounter content published by the website’s owner/author, and external ads. Since a certain type of content attracts a certain audience, the website owner can sell some space on their website to advertisers that want to reach those readers.

In the earlier years of the web, ads used to be contextual: a website would sell its ad space to advertisers in its field. For example, ads on a website about cars would typically relate to cars. Later, ads became more personalised, focusing on the individual reader – so-called “programmatic advertising”. The website still sells its space, but now it sells it to advertisement platforms, “ad exchanges”. Ad exchanges are digital marketplaces that connect publishers (like websites) to advertisers by auctioning off the attention you give that website. This automated auction process is called Real Time Bidding (RTB).

How does Real Time Bidding work?

Imagine auctions, stock exchanges, traders, big screens, noise, graphs, percentages. Similarly, RTB systems facilitate the auction of website ad space to the highest bidding advertiser. How does it work?

A website rents its advertising space to one (or many) ad exchanges. In the blink of an eye, the ad exchange creates a “bid request” that can include information from the website: what you’re reading, watching or listening to on the website you are on, the categories into which that content goes, your unique pseudonymous ID, your profile’s ID from the ad buyer’s system, your location, device type (smartphone or laptop), operating system, browser, IP address, and so on.

From their side, advertisers inform the ad exchange about who they want to reach. Sometimes they provide detailed customer segments. These categories have been obtained by combining the advertisers’ own data about (potential) customers with the personal profiles generated by data brokers such as Cambridge Analytica, Experian, Acxiom or Oracle. The ad exchange now has a complex profile of you, made of information from the website supplying the ad space and information from the advertiser demanding it. When there is a match between a bid request and an advertiser’s desired customer segment, a Demand Side Platform (DSP) acting on behalf of thousands of advertisers starts placing bids for the website’s ad space. The highest bid wins, places its ad in front of a particular website viewer, and the rest is history.
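The auction logic can be sketched in a few lines of Python (a simplified illustration only – real systems exchange OpenRTB JSON messages between servers in milliseconds, and all names below are hypothetical):

```python
# Simplified model of a Real Time Bidding auction.
from dataclasses import dataclass

@dataclass
class BidRequest:
    content_category: str    # what you are reading or watching
    user_segments: list      # profile data attached to the request
    user_id: str             # pseudonymous ID

@dataclass
class DSP:
    name: str
    wanted_segment: str      # customer segment this DSP targets
    max_price: float

    def bid(self, request: BidRequest) -> float:
        # Every DSP receives the full bid request whether or not it
        # bids -- this broadcast is the privacy problem described above.
        if self.wanted_segment in request.user_segments:
            return self.max_price
        return 0.0

def run_auction(request: BidRequest, bidders: list) -> DSP:
    # The highest bidder wins and its ad is shown to this visitor.
    return max(bidders, key=lambda dsp: dsp.bid(request))

request = BidRequest("cars", ["car-owner", "age-30-40"], "pseudo-id-4711")
winner = run_auction(request, [DSP("dsp-a", "car-owner", 0.40),
                               DSP("dsp-b", "gardening", 0.90)])
print(winner.name, winner.bid(request))   # dsp-a 0.4
```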


TL;DR

Every time you visit a website that uses RTB, your personal data is broadcast to potentially thousands of companies ready to target their ads. Whenever this happens, you have no control over who has access to your personal data. Whenever this happens, you have no way of objecting to being traded. Whenever this happens, you cannot oppose being targeted as a “Jew hater”, an incest or abuse victim, impotent, or a right-wing extremist. Whenever this happens, you have no idea whether you are being discriminated against.

Whenever this happens, you have no idea where your data flows.

EDRi members taking action against RTB

Real Time Bidding poses immense risks to our human rights in the digital space, specifically to the rights recognised in the EU General Data Protection Regulation (GDPR). Moreover, it puts you at high risk of being discriminated against. For these reasons, several EDRi members and observers have taken action and filed complaints against RTB in different EU countries. Privacy International, Panoptykon Foundation, Open Rights Group, Bits of Freedom, Digitale Gesellschaft, digitalcourage, La Quadrature du Net and Coalizione Italiana per le Libertà e i Diritti civili are taking part in a wider campaign that urges the ad tech industry to #StopSpyingOnUs.

Support their effort in fighting for your rights and spread the word!

Read More:

Privacy International full timeline of complaints
https://privacyinternational.org/adtech-complaints-timeline

GDPR Today: Ad Tech GDPR complaint is extended to four more European regulators
https://www.gdprtoday.org/ad-tech-gdpr-complaint-is-extended-to-four-more-european-regulators/

Prevent the Online Ad Industry from Misusing Your Data – Join the #StopSpyingOnUs Campaign
https://www.liberties.eu/en/campaigns/stop-spying-on-us-fix-ad-tech-campaign/307

The Adtech Crisis and Disinformation – Dr Johnny Ryan
https://vimeo.com/317245633

Blogpost series: Your privacy, security and freedom online are in danger (14.09.2016)
https://edri.org/privacy-security-freedom/

22 May 2019

Why should we vote in the EU elections?

By EDRi

What are your plans for the coming days? We have a suggestion: the European elections are taking place – and it’s absolutely crucial to go and vote!

In the past, the EU has often defended our digital rights and freedoms. This was possible because the Members of the European Parliament (MEPs) – whom we, the EU citizens, elected to represent us in EU decision-making – are open to hearing our concerns.

So, what exactly has the EU done for our digital rights?

Privacy

The EU has possibly the best protection for citizens’ personal data: the General Data Protection Regulation (GDPR). This law was adopted thanks to some very dedicated European parliamentarians, and it enhances everyone’s rights, regardless of nationality, gender, economic status and so on. Since the GDPR came into effect, we have, for example, the right to access the personal data a company or an organisation holds on us, the right to explanation and human intervention regarding automated decisions, and the right to object to profiling measures.

You can read more about your rights under the GDPR here: https://edri.org/a-guide-individuals-rights-under-gdpr/

Net neutrality

Europe has become a global standard-setter in the defence of the open, competitive and neutral internet. After a very long battle, and with the support of half a million people who responded to a public consultation, the principles that make the internet an open platform for change, freedom and prosperity are upheld in the EU.

In June 2015, negotiations between the three European Union institutions led to new rules to safeguard net neutrality – the principle according to which everyone can communicate with everyone on the internet without discrimination. This principle was put at risk by the ambiguous, unbalanced EU Commission proposal, which would have undermined the way in which the internet functions. In 2016, the Body of European Regulators for Electronic Communications (BEREC) was tasked with publishing guidelines to provide a common approach to implementing the Regulation in the EU Member States. In June 2016, BEREC published draft guidelines that confirmed strong protections for net neutrality and the open internet.

ACTA

In 2012, MEPs voted against an international trade agreement called the Anti-Counterfeiting Trade Agreement (ACTA), which, if concluded, would likely have resulted in online censorship. It would have had major implications for freedom of expression, access to culture and privacy, and it would have harmed international trade and stifled innovation. People decided to demonstrate, and protests against the draft agreement took place in over 200 European cities, calling for its rejection. In the end, the Parliament listened to the concerns of the people and voted against ACTA.

Protecting whistleblowers

Whistleblowers fight for transparency, democracy and the rule of law, reporting unlawful or improper conduct that undermines the public interest and our rights and freedoms. In 2017, the European Parliament called for legislation to protect whistleblowers, making a clear statement recognising their essential role in our society. This Resolution started the process of putting in place effective protections for whistleblowers throughout the EU. In April 2019, the Parliament adopted the new Directive, which still has to be approved by the EU Council.

Your vote matters for digital rights

On many occasions, the EU Parliamentarians have stood up for our rights and freedoms. It’s important that the new EU Parliament, too, is a strong defender of our digital rights – because there are so many important fights coming up.

The European elections are one of the rare occasions where we can take our future and the future of Europe into our own hands. Your vote matters. Please go and vote for digital rights on 23-26 May!

You can find more information about the elections online, for example at https://www.european-elections.eu, https://www.thistimeimvoting.eu/ and https://www.howtovote.eu/.

13 Mar 2019

The art of dodging questions – Facebook’s privacy policies

By Chloé Berthélémy

Remember how, in April 2018, after the Cambridge Analytica scandal broke, we sent a series of 13 questions to Facebook about their users’ data exploitation policy? Months later, Facebook got back to us with answers. Here is a critical analysis of their response.

Recognising people’s faces without biometric data?

The first questions (1a and 1b) related to Facebook’s new facial recognition feature, which scans every uploaded image for faces and compares them to those already in its database in order to identify users. Facebook claims that the identification process only works for users who explicitly consented to having the feature enabled, and that the initial detection stage, during which the photograph is analysed, does not involve the processing of biometric data. Biometric data is data used to identify a person through unique characteristics like fingerprints or facial features.

There are two issues here. First, contrary to what Facebook declared, the first batch of users for whom face recognition was activated received a notice but were not asked for consent. All users were opted in by default, and only a visit to the settings page allowed them to say “no”. For the second batch of users, Facebook apparently decided to automatically opt in only those accounts that had the photo tag suggestion feature activated, simply assuming that they wanted face recognition too. Obviously, this does not constitute explicit consent under the General Data Protection Regulation (GDPR).

Second, even if Facebook does not manage to identify users who disabled the feature or people who are not users, their photos might still be uploaded and their faces scanned. No technology can determine whether an image contains only users who gave consent, without actually scanning every uploaded photo to search for facial features.

Facebook has been presenting this new feature as an empowerment tool for users to control which pictures of them are being uploaded on the platform, to protect privacy and to prevent identity theft. However, EU officials and digital rights advocates denounced this communication practice as manipulating user consent by promoting facial recognition as an identity protection tool.

Privacy settings by default

One of our questions related to the initial settings every Facebook user has when creating an account and their protection levels by default (question 3). Facebook responded that it had suspended the ability to search for people by phone number in the Facebook search bar. Since Facebook responded to our questions in August 2018, it appears to have reinstated this function, set to “Everyone can look you up using your phone number” by default (see the Belgian account settings below, last consulted on 24 January 2019).

This reinstatement is probably linked to the upcoming merger of the Facebook-owned messaging systems: Facebook Messenger, WhatsApp and Instagram messaging. Identification requirements for each messaging application are different: a Facebook account for Messenger, a phone number for WhatsApp and an email address for Instagram. The merger gives Facebook the possibility to cross-reference information and to connect several profiles under a single, unified identity. What is worse, Facebook now reportedly makes searchable the phone numbers that users had provided for two-factor authentication, and there is no way to switch this feature off.

Other default privacy settings on Facebook are not protective either. The access to a user’s friend list is set to “publicly visible”, for example. Facebook justified the low privacy level by repeating that users join Facebook to connect with others. Nonetheless, even if users want to limit who can see their friend lists, people can see their Facebook friendships by looking at the publicly accessible friends lists of their friends. Some personal information will simply never be fully private under Facebook’s current privacy policies.

The Cambridge Analytica case

Facebook pleaded the misuse of its services and shifted the entire responsibility for the Cambridge Analytica scandal onto the quiz application “This Is Your Digital Life” (our questions 4 and 5). The app requested permission from users to access their personal messages and newsfeed. According to Facebook, there was no unauthorised access to data, as consent was freely given by users. However, accessing one user’s newsfeed and personal messages also meant that the application could access received posts and messages – that is to say, content from users who did not consent. Once again, individual privacy is highly dependent on others’ carefulness. Facebook admitted that it wished it had notified the affected users who did not give consent earlier. To our question why the appropriate national authorities were not notified of the incident immediately, Facebook gave no answer.

“This Is Your Digital Life” is just one application, but there may be many more that harvest similar amounts of personal data without the consent of users. Facebook assured us that it has made it harder for third parties to misuse its systems. Nevertheless, the limits to the processing of collected data by third parties remain unclear, and we received no answer about the current possibilities for other applications to share and receive users’ messages.

Facebook’s ad targeting practices

“Advertising is central not only to our ability to operate Facebook, but to the core service that we provide, so we do not offer the ability to disable advertising altogether.” While advertising is thus non-negotiable (our question 9), Facebook explained that through its new Ad Preferences tool (our question 6), users can nevertheless decide whether or not they want to see ads targeted at them based on their interests and personal data. The Ad Preferences tool gives users control over the criteria used for targeted advertising: data provided by the user, data collected from Facebook partners, and data based on the user’s activity on Facebook products. Users can also hide advertisement topics and disable advertisers with whom they have interacted.

But if Facebook were treating ad settings the same way as privacy settings, as it claims to do, the default settings for a new user would look very different. For this article, we created a new Facebook account and found that Facebook does not guide new users through the opt-in and opt-out options for privacy and ad settings. On the contrary, Facebook’s default ad settings involve the profiling of new users based on their relationship status, job title, employer and education (see the new account’s settings below). Those defaults are clearly incompatible with the GDPR’s “privacy by default” requirement.

Ads are also based on users’ activity on Facebook products, which are present on “websites, apps and devices that use [Facebook’s] advertising services”. This includes everything from social media plugins such as “Like” or “Share” buttons to Facebook Messenger, Instagram or even WhatsApp, which has its own stand-alone terms of service and privacy policy. If a third-party website uses Facebook Analytics, traces left by the user on that website will be used as well. Since Facebook is acquiring more and more applications, the list goes on and on. “Data from different apps can paint a fine-grained and intimate picture of people’s activities, interests, behaviours and routines, some of which can reveal special category data, including information about people’s health or religion.”

In the same vein, EDRi member Privacy International found that Facebook collects personal information on people who are logged out of Facebook or don’t even have a Facebook account. The social media company owns so many apps, “business tools” and services that it is capable of tracking users, non-users and logged-out users across the internet. Facebook doesn’t seem to be willing to change its business practices to respect people’s privacy. Privacy is not about what Facebook users can see from each other but what information is accessed and used by third parties and for which purposes without the users’ knowledge or consent.

Profiling and automated decision-making

Article 22 of the GDPR introduces a right not to be subject to a decision based solely on automated processing, including profiling, which produces legal or “similarly significant” effects for the user. We asked Facebook what measures it takes to make sure its ad targeting practices, notably for political ads, are compliant with this provision (question 7). In its answer, Facebook asserts that its targeted ads based on automated decision-making do not have legal or similarly significant effects. In light of the numerous scandals the company has been facing around the manipulation of the 2016 US elections and the Brexit referendum, this answer is quite surprising: many would argue that the way Facebook targets voters with ads based on automated decision-making does have “similarly significant”, if not legal, effects for its users and for societies as a whole. Unfortunately, Facebook doesn’t seem to consider that it should change its ad targeting practices.

Special categories of data

Article 9 of the GDPR defines special categories of particularly sensitive data that include racial or ethnic origin, political opinions, religious beliefs, health, sexual orientation and biometric data. Facebook says that without the user’s explicit consent to the use of such special categories of data, the data will be deleted from the respective profiles and from Facebook’s servers (our question 2.a).

What Facebook doesn’t say is that users don’t even need to share this information for the platform to monetise it. Facebook can simply deduce religious views, political opinions and health data from which third-party websites users visit, what they write in Facebook posts, and what they comment on and share: Facebook does not need users to fill in their profile fields when it can infer extremely sensitive information from all the other data users generate on the platform day in, day out. Facebook can then assign ad preferences (such as “LGBT community”, “Socialist Party” or “Eastern Orthodox Church”) based on each user’s online activities, without asking for consent at all, and exploit them for advertising purposes. Researchers argue that the practice of labelling Facebook users with ad preferences associated with special categories of personal data may be in breach of Article 9 of the GDPR, because no legal basis other than explicit consent could allow this form of use. In its reply to our questions, Facebook deliberately omitted its use of sensitive data derived from user behaviour, posts, comments, likes and so on to feed its marketing profiles. It is too easy to focus on the tip of the iceberg.

Right to access

Replying to our request on the right to access, download, erase or modify personal data, Facebook described its three main tools: Download Your Information (DYI), Access Your Data (AYD) and Clear History (our question 8). According to Facebook, DYI provides users with all the data they have provided on the platform. But as explained above, this does not include information inferred by the platform from user behaviour, posts, comments, likes and so on, nor information provided by friends or other users, such as tags in photos or posts.

Lastly, Facebook confirmed that it was not using smartphone microphones to inform ads (our question 12). This might even be true, because Facebook already has plenty of surveillance tools at hand to gather enough information about users to produce disconcerting advertisements.

Questions left without answers

  1. What was the cut-off date before Facebook started deleting information users added to their profile and did not give explicit consent for their processing?
  2. Will Facebook offer a single place where people who have no Facebook account can control every privacy aspect of Facebook?
  3. If Facebook apps were to use smartphone microphones in any way, would you consider that lawful?
  4. You claim to offer a way for users to download their data with one click. Can you confirm that the downloaded files contain all the data that Facebook holds on each user?

Written Responses to EDRi Questions (22.06.2018)
https://edri.org/files/edri_responses_facebook_20180622.pdf

Privacy International’s study on ‘How Apps on Android Share Data with Facebook – Report’ (29.12.2018)
https://privacyinternational.org/report/2647/how-apps-android-share-data-facebook-report

Facebook Use of Sensitive Data for Advertising in Europe
https://arxiv.org/pdf/1802.05030.pdf

Facebook Doesn’t Need To Listen Through Your Microphone To Serve You Creepy Ads (13.04.2018)
https://www.eff.org/fr/deeplinks/2018/04/facebook-doesnt-need-listen-through-your-microphone-serve-you-creepy-ads

(Contribution by Chloé Berthélémy, EDRi)
