Freedom of expression

Freedom of expression is one of the key benefits of the digital era. The global information society permits interaction on a scale that was previously unheard of – promoting intercultural exchange and democracy. Consequently, the protection of this freedom is central to much of EDRi’s work.

10 Oct 2019

Open letter to EU Member States: Deliver ePrivacy now!


On 11 October 2019, EDRi, together with four other civil society organisations, sent an open letter to EU Member States, urging them to conclude the negotiations on the ePrivacy Regulation. The letter highlights the urgent need for a strong ePrivacy Regulation to tackle the problems created by commercial surveillance business models, and expresses deep concern that the Member States, represented in the Council of the European Union, still have not made decisive progress, more than two and a half years after the Commission presented the proposal.

You can read the letter here (pdf) and below:

Open letter to EU Member States

Dear Minister,

We, the undersigned organisations, urge you to swiftly reach an agreement in the Council of the European Union on the draft ePrivacy Regulation.

We are deeply concerned by the fact that, more than two and a half years since the Commission presented the proposal, the Council still has not made decisive progress. Meanwhile, one after another, privacy scandals are hitting the front pages, from issues around the exploitation of data in the political context, such as “Cambridge Analytica”, to the sharing of sensitive health data. In 2019, for example, an EDRi/CookieBot report demonstrated how EU governments unknowingly allow the ad tech industry to monitor citizens across public sector websites.1 An investigation by Privacy International revealed how popular websites about depression in France, Germany and the UK share user data with advertisers, data brokers and large tech companies, while some depression test websites leak answers and test results to third parties.2

A strong ePrivacy Regulation is necessary to tackle the problems created by the commercial surveillance business models. Those business models, which are built on tracking and cashing in on people’s most intimate moments, have taken over the internet and create incentives to promote disinformation, manipulation and illegal content.

What Europe gains with a strong ePrivacy Regulation

The reform of the current ePrivacy Directive is essential to strengthen – not weaken – individuals’ fundamental rights to privacy and confidentiality of communications.3 It is necessary to make current rules fit for the digital age.4 In addition, a strong and clear ePrivacy Regulation would push Europe’s global leadership in the creation of a healthy digital environment, providing strong protections for citizens, their fundamental rights and our societal values. All this is key for the EU to regain its digital sovereignty, one of the goals set out by Commission President-elect Ursula von der Leyen in her political guidelines.5

Far from being an obstacle to the development of new technologies and services, the ePrivacy Regulation is necessary to ensure a level playing field and legal certainty for market operators.6 It is an opportunity for businesses7 to innovate and invest in new, privacy-friendly, business models.

What Europe loses without a strong ePrivacy Regulation

Without the ePrivacy Regulation, Europe will continue living with an outdated Directive which is not being properly enforced8 and the completion of our legal framework initiated with the General Data Protection Regulation (GDPR) will not be achieved. Without a strong Regulation, surveillance-driven business models will be able to cement their dominant positions9 and continue posing serious risks to our democratic processes.10 11 The EU also risks losing the position as global standard-setter and digital champion that it earned through the adoption of the GDPR.

As a result, people’s trust in internet services will continue to fall. According to the Special Eurobarometer Survey of June 2019, the majority of users believe that they only have partial control over the information they provide online, with 62% of them being concerned about it.

The ePrivacy Regulation is urgently needed

We expect the EU to protect people’s fundamental rights and interests against practices that undermine the security and confidentiality of their online communications and intrude in their private lives.

As you meet today to discuss the next steps of the reform, we urge you to finally reach an agreement to conclude the negotiations and deliver an upgraded and improved ePrivacy Regulation for individuals and businesses. We stand ready to support your work.

Yours sincerely,

The European Consumer Organisation (BEUC)
European Digital Rights (EDRi)
Privacy International
Open Society European Policy Institute (OSEPI)


Read more:

Open letter to EU Member States on ePrivacy (11.10.2019)

Right a wrong: ePrivacy now! (09.10.2019)

Civil society calls Council to adopt ePrivacy now (05.12.2018)

ePrivacy reform: Open letter to EU member states (27.03.2018)

09 Oct 2019

Right a wrong: ePrivacy now!

By Ella Jakubowska

When the European Commission proposed to replace the outdated and improperly enforced 2002 ePrivacy Directive with a new ePrivacy Regulation in January 2017, it marked a cautiously hopeful moment for digital rights advocates across Europe. With the backdrop of the General Data Protection Regulation (GDPR), adopted in May 2018, Europe took a giant leap ahead for the protection of personal data. Yet by failing to adopt the only piece of legislation protecting the right to privacy and to the confidentiality of communications, the Council of the European Union seems to have prioritised private interests over the fundamental rights, security and freedoms of citizens that would be protected by a strong ePrivacy Regulation.

This is not an abstract problem; commercial surveillance models – where businesses exploit user data as a key part of their business activity – pose a serious threat to our freedom to express ourselves without fear. This model relies on profiling, essentially putting people into the boxes in which the platforms believe they belong – which is a very slippery slope towards discrimination. And as children make up an increasingly large proportion of internet users, the risks become even starker: their online actions could impact their access to opportunities in the future. Furthermore, these models are set up to profit from the mass sharing of content, and so platforms are perversely incentivised to promote sensationalist posts that could harm democracy (for example political disinformation).

The rise of highly personalised adverts (“microtargeting”) means that online platforms increasingly control and limit the parameters of the world that you see online, based on their biased and potentially discriminatory assumptions about who you are. And as for that online quiz about depression that you took? Well, that might not be as private as you thought.

It is high time that the Council of the European Union takes note of the risks to citizens caused by the current black hole where ePrivacy legislation should be. Amongst the doom and gloom, there are reasons to be optimistic. If delivered in its strongest form, an improved ePrivacy Regulation will complement the GDPR; it will ensure compliance with essential principles such as privacy by design and by default; it will tackle the pervasive model of online tracking and the disinformation it creates; and it will give power back to citizens over their private life and interests. We urge the Council to swiftly update and adopt a strong, citizen-centred ePrivacy Regulation.

e-Privacy revision: Document pool

ePrivacy: Private data retention through the back door (22.05.2019)

Captured states – e-Privacy Regulation victim of a “lobby onslaught” (23.05.2019)

NGOs urge Austrian Council Presidency to finalise e-Privacy reform (07.11.2018)

e-Privacy: What happened and what happens next (29.11.2017)

(Contribution by Ella Jakubowska, EDRi intern)

09 Oct 2019

Why weak encryption is everybody’s problem

By Ella Jakubowska

Representatives of the UK Home Department, US Attorney General, US Homeland Security and Australian Home Affairs have joined forces to issue an open letter to Mark Zuckerberg. In their letter of 4 October, they urge Facebook to halt plans for end-to-end (aka strong) encryption across Facebook’s messaging platforms, unless such plans include “a means for lawful access to the content of communications”. In other words, the signatories are requesting what security experts call a “backdoor” for law enforcement to circumvent legitimate encryption methods in order to access private communications.
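To illustrate what such a backdoor means in practice, here is a toy sketch (using a simple XOR cipher – not real cryptography, and not any actual Facebook or government system): with end-to-end encryption only the communicating parties hold the key, whereas a "lawful access" mechanism requires a copy of that key to exist somewhere else – and whoever obtains the copy can read everything it protects.

```python
import secrets

def encrypt(message: bytes, key: bytes) -> bytes:
    # Toy one-time-pad-style XOR cipher: purely illustrative,
    # NOT real cryptography.
    return bytes(m ^ k for m, k in zip(message, key))

decrypt = encrypt  # XOR is its own inverse

# Alice and Bob share a secret key; with true end-to-end encryption,
# only they hold it.
message = b"meet at noon"
key = secrets.token_bytes(len(message))
ciphertext = encrypt(message, key)

# A "lawful access" backdoor means a copy of the key (or an equivalent
# master key) is held by a third party...
escrow_copy = key

# ...and anyone who obtains that copy -- an investigator, a rogue
# insider, or a criminal who breaches the escrow database -- can read
# every message it protects.
assert decrypt(ciphertext, escrow_copy) == message
```

Real systems use vetted ciphers rather than XOR, but the structural point is the same: an escrowed key is a single point of failure for every communication it covers.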

The myth of weak encryption as safe

Whilst the US, UK and Australia are adamant that their position enhances the safety of citizens, there are many reasons to be sceptical. The open letter uses emotive language to emphasise the risk of “child sexual exploitation, terrorism and extortion” that the signatories claim is associated with strong encryption, but fails to give a balanced assessment that includes the risks weak encryption poses to privacy, democracy and most business transactions. By positioning weak encryption as a “safety” measure, the US, UK and Australia imply (or even explicitly state) that supporters of strong encryption are supporting crime.

Government-led attacks on everybody’s digital safety aren’t new. Since the 1990s, the US has tried to prevent the export of strong encryption and—when that failed—worked on forcing software companies to build backdoors for the government. Those attempts were called the first “Cryptowars”.

In reality, however, arguing that encryption mostly helps criminals is like saying that vehicles should be banned and all knives blunt because both have been used by criminals and terrorists. Such reasoning ignores that in the huge majority of cases strong encryption greatly enhances people’s safety. From enabling secure online banking, to keeping citizens’ messages private, internet users and companies rely on strong encryption every single day. It is the foundation of trusted, secure digital infrastructure. Weak encryption, on the other hand, is like locking the front door of your home, only to leave the back one open. Police may be able to enter more easily – but so too can criminals.

Strong encryption is vital for protecting civil rights

The position outlined by the US, UK and Australia is fundamentally misleading. Undermining encryption harms innocent citizens. Encryption already protects some of the most vulnerable people worldwide – journalists, environmental activists, human rights defenders, and many more. State interception of private communications is frequently not benign: government hacking can and does lead to egregious violations of fundamental rights.

For many digital rights groups, this debate is the ultimate groundhog day, and valuable effort is expended year after year on challenging the false dichotomy of “privacy versus security”. Even the European Commission has struggled to sort fact from fear-mongering.

However, it is worth remembering that Facebook’s announcement to encrypt some user content is so far just that: an announcement. The advertising company’s approach to privacy is a supreme example of surveillance capitalism: protecting some users when it is favourable for their PR, and exploiting user data when there is a financial incentive to do so. To best protect citizens’ rights, we need a concerted effort between policy-makers and civil society to enact laws and build better technology so that neither our governments nor social media platforms can exploit us and our personal data.

The bottom line

Facebook must refuse to build anything that could constitute a backdoor into their messaging platforms. Otherwise, Facebook is handing the US, UK and Australian governments a surveillance-shaped skeleton key that puts Facebook users at risk worldwide. And once that door is unlocked, there will be no way to control who will enter.

EDRi Position paper on encryption: High-grade encryption is essential for our economy and our democratic freedoms (25.01.2015)

Encryption – debunking the myths (03.05.2017)

Encryption Workarounds: a digital rights perspective (12.09.2017)

(Contribution by Ella Jakubowska, EDRi intern)

09 Oct 2019

Content regulation – what’s the (online) harm?

By Access Now and EDRi

In recent years, national legislators in EU Member States have been pushing for new laws to combat negative societal phenomena such as hateful or terrorist content online. These regulatory efforts have one common denominator: they shift the focus from conditional intermediary liability to holding intermediaries directly responsible for the dissemination of illegal content on their platforms.

Two prominent legislative and policy proposals of this kind that will significantly shape the European debate around the future of intermediary liability are the UK White Paper on Online Harms and the newly adopted Avia law in France.

UK experiment to fight online harm: overblocking on the horizon

In April 2019, the United Kingdom (UK) government proposed a new regulatory model including a so-called statutory duty of care, saying it wants to make platform companies more responsible for the safety of online users. The paper foresees a future regulation that holds companies accountable for a set of vaguely predefined “online harms” which includes illegal content, but also users’ behaviours that are deemed harmful but not necessarily illegal.

EDRi and Access Now have long emphasised the risk that privatised law enforcement and heavy reliance on automated content filters pose to human rights online. In this vein, multiple civil society organisations, including EDRi members (for example Article 19 and Index on Censorship), have warned against the alarming measures the British approach contains. The envisaged duty of care, combined with heavy fines, creates incentives for platform companies to block online content to avoid liability, even if its illegality is doubtful. The regulatory approach proposed by the UK Online Harms White Paper will in effect coerce companies into adopting content filtering measures that will ultimately result in the general monitoring of all information being shared on online platforms. Due to over-compliance with states’ demands, such conduct often amounts to illegitimate restrictions on freedom of expression or, in other words, online censorship. Moreover, a general monitoring obligation is currently prohibited by European law.

The White Paper also covers activities and content that are not illegal but potentially undesirable, such as advocacy of self-harm or disinformation. This is highly problematic with regard to the human rights law criteria that guide restrictions on freedom of expression. The ill-defined and vague concept of “online harms” cannot serve as a proper legal basis to justify an interference with fundamental rights. Ultimately, the proposal falls short of providing substantial evidence to sustain its approach. It also bluntly fails to address key issues of online regulation, such as the content distribution that lies at the core of platforms’ business models, the opacity of algorithms, violations of online privacy, and data breaches.

French Avia law: Another “quick fix” to online hate speech?

Inspired by the German Network Enforcement Act (NetzDG), France has now adopted its own piece of legislation, the so-called Avia law – named after the Rapporteur of the file, Member of Parliament Laetitia Avia. Similarly to NetzDG, the law requires companies to remove manifestly illegal content within 24 hours of receiving a notification about it.

Following its German predecessor, the Avia law encourages companies to be overly cautious and pre-emptively remove or block content to avoid substantial fines for non-compliance. The time frame in which they are expected to take action is too short to allow for a proper assessment of each case at stake. Importantly, the French Parliament does not rule out the possibility for companies to resort to automated decision-making tools in order to process the notices. Such a measure can in itself be grounded in the legitimate objective of fighting hatred, racism, LGBTQI+-phobic and other discriminatory content. However, tackling hate speech and other context-dependent content requires careful and balanced analysis. In practice, leaving it to private actors, without adequate oversight and redress mechanisms, to decide whether a piece of content meets the threshold of “manifest illegality” will be damaging for freedom of expression and the rule of law.

However, there are also positive aspects of the Avia law. It provides procedural fairness safeguards by requiring individuals who notify potentially illegal content to state the reasons why they believe it should be removed. Moreover, the law sets out obligations for companies to establish internal complaint and appeal mechanisms for both the notifier and the content provider. Transparency obligations on content moderation policies are also introduced. Lastly, the regulator established by the Avia law does not focus its evaluation solely on the amount of content removed, but also scrutinises over-removal when monitoring compliance with the law.

Do not fall into the same trap!

We are currently witnessing regulatory efforts at the national and European level that seek to provide easy solutions to online phenomena such as terrorist content or hate speech, ignoring the underlying societal issues. Most of the suggested solutions rely on filters and content recognition technologies with limited ability to assess the context in which a given piece of content has been posted. Proper safeguards and requirements for meaningful transparency that should accompany these measures are often sidetracked by legislators. Similar trends can be observed beyond the EU and its Member States. For instance, the Australian government recently adopted a new bill imposing criminal liability on executives of social media platforms. In the US, Section 230 of the Communications Decency Act (CDA) may be placed under review by a presidential executive order that would significantly limit the liability protections granted to platform companies by the existing law.

Legislators around the globe have one thing in common: the urge to “eradicate” vaguely defined “online harms”. The rhetoric of danger surrounding online harms has become a driving force behind regulatory responses in liberal democracies. This is exactly the kind of logic frequently used by authoritarian regimes to restrict legitimate debate. With the upcoming Digital Services Act (DSA) potentially replacing the E-Commerce Directive in Europe, the EU has an extraordinary opportunity to become a trend-setter, establishing high standards for the protection of users’ human rights, while addressing legitimate concerns stemming from the spread of illegal online content.

For this to happen, the European Commission should propose a law that imposes workable, transparent and accountable content moderation procedures and a functioning notice-and-action system on platforms. Such positive examples of platform regulation should be combined with forceful action against the centralisation of power over data and information in the hands of a few big tech companies. EDRi and Access Now have developed specific recommendations containing human rights safeguards, which should be incorporated both into content moderation exercised by companies and into State regulation tackling illegal online content. The European Commission’s responsibility is to ensure fundamental rights during the process of drafting any future legislation governing intermediary liability and redefining content governance online.


Access Now

Access Now’s human rights guide on protecting freedom of expression in the era of online content moderation (13.05.2019)

E-Commerce review: Opening Pandora’s box? (20.06.2019)

French law aimed at combating hate content on the internet (09.07.2019)

UK: Online Harms Strategy must “design in” fundamental rights (10.04.2019)

UK’s Online Harms White Paper (04.2019)

(Contribution by Eliška Pírková, EDRi member Access Now, and Chloé Berthélémy, EDRi)

03 Oct 2019

CJEU ruling on fighting defamation online could open the door for upload filters


Today, on 3 October 2019, the Court of Justice of the European Union (CJEU) gave its ruling in the case C‑18/18 Glawischnig-Piesczek v Facebook. The case relates to injunctions obliging a service provider to stop the dissemination of a defamatory comment. Some aspects of the decision could pose a threat to freedom of expression, in particular that of political dissidents who may be accused of defamatory practices.

This ruling could open the door for exploitative upload filters for all online content.

said Diego Naranjo, Head of Policy at EDRi.

Despite the positive intention to protect an individual from defamatory content, this decision could lead to severed freedom of expression for all internet users, with particular risks for political critics and human rights defenders by paving the road for automated content recognition technologies.

The ruling confirms that a hosting provider such as Facebook can be ordered, in the context of an injunction, to seek and identify, among all the content shared by its users, content that is identical to the content characterised as illegal by a court. By extending the obligation to block future content to all users of a large platform like Facebook, the Court has in effect deemed it compatible with the E-Commerce Directive for courts to demand automated upload filters, blurring the distinction between general and specific monitoring drawn in its previous case law. EDRi is concerned that automated upload filters for identical content will not be able to distinguish between legal and illegal content, in particular when applied to individual words that can have very different meanings depending on the context and the intent of the user.

EDRi welcomes the Court’s attempt to find a balance between rights (namely freedom of expression and the freedom to conduct a business) and to limit the impact on freedom of expression by differentiating between the search for identical and equivalent content. However, the ruling seems to depart from previous case law regarding the ban on general monitoring obligations (for example Scarlet v. Sabam). Imposing filtering of all communications in order to look for one specific piece of content, using non-transparent algorithms, is likely to unduly restrict legal speech – regardless of whether the filters look for content that is identical or equivalent to illegal content.

The upcoming review of the E-Commerce Directive should clarify, among other things, how to deal with online content moderation. In the context of this review, it is crucial to address the problem of disinformation without unduly interfering with the fundamental right to freedom of expression for users of the platform. Specifically, the business model based on amplifying certain types of content to the detriment of others in order to attract users’ attention requires urgent scrutiny.

Read more:

No summer break for free expression in Europe: Facebook cases that matter for human rights (23.09.2019)

CJEU case C-18/18 – Glawischnig-Piesczek Press Release (03.10.2019)

CJEU case C-18/18 – Glawischnig-Piesczek ruling (03.10.2019)

Fighting defamation online – AG Opinion forgets that context matters (19.06.2019)

Dolphins in the Net, a New Stanford CIS White Paper

SABAM vs Netlog – another important ruling for fundamental rights (16.02.2012)

01 Oct 2019

CJEU on cookies: ‘Consent or be tracked’ is not an option


Today, on 1 October 2019, the Court of Justice of the European Union (CJEU) gave its ruling on “cookie consent” requirements. European Digital Rights (EDRi) welcomes the CJEU’s confirmation that under the current data protection framework, cookies can only be set if users have given consent that is valid under the General Data Protection Regulation (GDPR). This means consent needs to be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of a user’s agreement.

‘Consent or be tracked’ is not an option. The CJEU ruling spells it out for the industry and calls for clear rules on confidentiality of our communications

said Diego Naranjo, Head of Policy at EDRi.

EU Members States need to finally move forward with legislating this practice, and take the much needed ePrivacy Regulation out of the EU Council’s closet.

This ruling is a positive step towards protecting people from hidden commercial surveillance techniques deployed by the advertisement industry. It is, however, crucial to also urgently finalise the new ePrivacy Regulation that complements the GDPR in strengthening the privacy and security of electronic communications.

Read more:

CJEU press release: Storing cookies requires internet users’ active consent – A pre-ticked checkbox is therefore insufficient (01.10.2019)

CJEU ruling C-673/17 (01.10.2019)

Video: Cookies (05.09.2016)

EU Council considers undermining ePrivacy (30.06.2018)

Civil society calls Council to adopt ePrivacy now (05.12.2018)

Freedom to be different: How to defend yourself against tracking (27.09.2016)

e-Privacy revision: Document pool

26 Sep 2019

Mozilla Fellow Petra Molnar joins us to work on AI & discrimination

By Guest author

Starting on 1 October, Petra Molnar will join our team as a Mozilla Fellow. She is a lawyer specialising in migration, human rights, and technology, and has a Masters of Social Anthropology from York University, a Juris Doctorate from the University of Toronto, and an LL.M in International Law from the University of Cambridge. Mozilla Fellowships are organised and supported by the Mozilla Foundation, and each Fellow works on a specific project in collaboration with a host organisation such as EDRi. With our upcoming work on artificial intelligence (AI) and our experience working on surveillance and data protection, we look forward to working with Petra to add our voice to the ongoing discussions on the impact of algorithms on vulnerable populations, such as migrants and refugees.

Artificial intelligence and migration management from a human rights perspective

The systematic detention of migrants at the US-Mexico border. The wrongful deportation of 7 000 foreign students accused of cheating on a language test. Racist or sexist discrimination based on social media profiles. What do these examples have in common? In every case, an algorithm made a decision with serious consequences for people’s lives.

Nearly 70 million people are currently on the move due to conflict, instability, environmental factors, and economic reasons. Many states and international organisations involved in migration management are exploring machine learning to increase efficiency and support border security. These experiments range from big data predictions about population movements in the Mediterranean, to Canada’s use of automated decision-making in immigration applications, to AI lie detectors deployed at European airports. However, most of these experiments fail to account for the far-reaching impacts on human lives and human rights. These unregulated technologies are developed with little oversight, transparency, and accountability.

Expanding on my work on the human rights impacts of automated decision-making in immigration, this ethnographic project and accompanying advocacy campaign aims to create a governance mechanism for AI in migration with human rights at the centre. While embedded at EDRi, I will interview affected populations, experts, technologists, and policy makers to produce a well-researched report on the human rights impacts of migration management technologies, collaborating with academics, tech developers, the UN, governments, and civil society. This project will build on the work already done in the EU and provide feedback to EDRi’s ongoing work on AI. I will engage with NGOs to help build EDRi’s network and broaden the scope of action to non-digital groups beyond the EU, translating these efforts into a global strategy for the governance of migration management technologies.

I am delighted to be working with EDRi on this important project as the 2019-2020 Mozilla Fellow!

Mozilla Fellowships

Big Data and International Migration (16.06.2014)

Bots at the Gate – A Human Rights Analysis of Automated Decision Making in Canada’s Immigration and Refugee System (26.09.2018)

Emerging Voices: Immigration, Iris-Scanning and iBorderCTRL–The Human Rights Impacts of Technological Experiments in Migration (19.08.2019)

(Contribution by Petra Molnar, Mozilla Fellow, EDRi)

25 Sep 2019

PNR complaint advances to the Austrian Federal Administrative Court


On 19 August 2019, an Austrian EDRi member lodged a complaint with the Austrian data protection authority (DPA) against the Passenger Name Records (PNR) Directive. After only three weeks, on 6 September, they received the response from the DPA: the complaint was rejected. That may sound negative at first, but it is actually good news: the complaint can and must now be lodged with the Federal Administrative Court.

Why was the complaint rejected?

The DPA has no authority to decide whether or not laws are constitutional. Moreover, it cannot refer the matter to the Court of Justice of the European Union (CJEU), which is necessary in this case, because the complaint concerns an EU Directive. It was to be expected that the DPA would decide in this way, but the speed of the decision was somewhat surprising – in a positive way. It was clear from the outset that the DPA would reject the complaint, but it was a necessary step that could not be skipped, as there is no other legal route to the Federal Administrative Court than via the DPA. All seven complaints lodged with the organisation’s support were merged, and the organisation was given the power of representation, meaning that it may represent the complainants.

What are the next steps?

Meanwhile, the organisation is still waiting for a response to a freedom of information (FOI) request it sent to the Passenger Information Unit (PIU) that processes PNR data in Austria. While an answer to one request was received within a few days, another has been overdue since 23 August. The unanswered request concerns the data protection framework conditions for the PNR implementation. The complaint will be filed with the Federal Administrative Court within four weeks. It is to be expected that the court will submit legal questions to the Court of Justice of the European Union (CJEU).

Passenger Name Records

Passenger surveillance brought before courts in Germany and Austria (22.05.2019)

PNR: EU Court rules that draft EU/Canada air passenger data deal is unacceptable (26.07.2017)

(Contribution by Iwona Laub, EDRi member, Austria)

25 Sep 2019

Why EU passenger surveillance fails its purpose


The EU Directive imposing the collection of air passengers' information (Passenger Name Record, PNR) was adopted in April 2016, on the same day as the General Data Protection Regulation (GDPR). The collection of PNR data from all flights entering and leaving the EU has a strong impact on individuals' right to privacy; it can be justified only on the basis of necessity and proportionality, and only if it meets objectives of general interest. All of this is lacking in the current EU PNR Directive, which is now being implemented across the EU.

The Austrian implementation of the PNR Directive

In Austria, the Passenger Information Unit (PIU) has processed PNR data since March 2019. On 9 July 2019, the Passenger Data central office (Fluggastdatenzentralstelle) responded to inquiries into the PNR implementation in Austria. According to the document, between February 2019 and 14 May 2019, 7 633 867 records were transmitted to the PIU. On average, about 490 hits per day are reported, with about 3 430 hits per week requiring further verification. Out of the 7 633 867 reported records, there were 51 confirmed matches, and in 30 cases staff at the airport concerned intervened.
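For perspective, these reported figures imply a vanishingly small confirmed-match rate. A quick back-of-the-envelope calculation, using only the numbers cited above, illustrates this:

```python
# Figures reported by the Austrian Passenger Data central office
records = 7_633_867          # PNR records transmitted to the PIU
confirmed_matches = 51       # hits confirmed after verification
interventions = 30           # cases with intervention at the airport

# Share of all transmitted records that led to a confirmed match
match_rate = confirmed_matches / records
print(f"confirmed match rate: {match_rate:.6%}")

# Share of records that led to an actual intervention at an airport
intervention_rate = interventions / records
print(f"intervention rate:    {intervention_rate:.6%}")
```

Fewer than seven records in a million led to a confirmed match, which foreshadows the false-positive problem discussed below.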

Impact on innocents

What this small show of success does not capture, however, is the damage inflicted on the thousands of innocent passengers who are wrongly flagged by the system and who can be subjected to damaging police investigations or denied entry into destination countries without proper cause. Mass surveillance that searches a whole population for a small, select group is invasive, inefficient, and contrary to fundamental rights. It subjects the majority of people to extreme security measures that are not only ineffective at catching terrorists and criminals, but that also undermine privacy rights and can cause immense personal damage.

Why is this happening? The base rate fallacy

Imagine that a city with a population of 1 000 000 people implements surveillance measures to catch terrorists. This particular surveillance system has a failure rate of 1%, meaning that (1) when a terrorist is screened, the system registers a hit 99% of the time and fails to do so 1% of the time, and (2) when a non-terrorist is screened, the system correctly ignores them 99% of the time but registers them as a hit 1% of the time. What is the probability that a person flagged by this system is actually a terrorist?

At first glance, it might look like there is a 99% chance that this person is a terrorist. Given the system's failure rate of 1%, this prediction seems to make sense. However, this is an example of incorrect intuitive reasoning, because it fails to take into account how rare terrorists are in the population being screened.

This is the base rate fallacy: the tendency to ignore base rates, that is, actual prevalence, in the presence of specific, individuating information. Rather than integrating general statistics with information about an individual case, the mind tends to ignore the former and focus on the latter. One form of the base rate fallacy is the false positive paradox suggested above, in which false positives are more probable than true positives. This occurs when the condition is rare in the overall population, so that its true incidence rate is lower than the false positive rate. Deconstructing the false positive paradox shows that the true chance of a flagged person being a terrorist is closer to 1% than to 99%.

In our example, out of one million inhabitants there would be 999 900 law-abiding citizens and 100 terrorists. The surveillance system registers 99 true positives (99% of the 100 terrorists) but also 9 999 false positives (1% of the 999 900 innocent citizens), a number that would overwhelm even the best system. In all, 10 098 people, 9 999 non-terrorists and 99 actual terrorists, will trigger the system. Due to the high number of false positives, the probability that a person flagged by the system is actually a terrorist is therefore not 99% but below 1%. Searching large data sets for a few suspects means that only a small share of hits will ever be genuine. This is a persistent mathematical problem that cannot be avoided, even with improved accuracy.
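The arithmetic above can be verified with a short script using the same numbers as the example (a population of one million, 100 terrorists, and a 1% error rate in both directions):

```python
# False positive paradox: screening a large population for rare
# targets yields mostly false alarms, even with 99% accuracy.
population = 1_000_000
terrorists = 100
sensitivity = 0.99            # chance a terrorist triggers a hit
false_positive_rate = 0.01    # chance an innocent triggers a hit

innocents = population - terrorists                 # 999 900
true_positives = terrorists * sensitivity           # 99
false_positives = innocents * false_positive_rate   # 9 999
total_flagged = true_positives + false_positives    # 10 098

# Probability that a flagged person is actually a terrorist
precision = true_positives / total_flagged
print(f"{precision:.2%}")  # prints "0.98%", not 99%
```

Note that improving the system's accuracy barely helps: even halving the false positive rate still leaves thousands of innocent people flagged for every genuine hit.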

Security and privacy are not incompatible; rather, there is a balance that each society must determine. By relying on faulty mathematical assumptions, the PNR system ensures that neither security nor privacy is protected.

PNR – Passenger Name Record

Passenger surveillance brought before courts in Germany and Austria (22.05.2019)

We’re going to overturn the PNR directive (14.05.2019)

NoPNR – We are taking legal action against the mass processing of passenger data!

An Explainer on the Base Rate Fallacy and PNR (22.07.2019)

(Contribution by Kaitlin McDermott, EDRi-member, Austria)

25 Sep 2019

Facebook users blocked simply for mentioning a name?

By Dean Willis

Merely writing two words, in this case "Tommy Robinson", in a Facebook post or link is enough to get the post removed and the writer blocked. At least, that seems to be the case in Denmark and Sweden.

Writing the name of the English right-wing activist violates Facebook's Community Standards, specifically a category aimed at so-called hate preachers, defined as "individuals or organizations that organize or incite violence". Other Facebook users are not allowed to support, praise or represent banned hate preachers. According to statements by Facebook's Nordic Head of Communications, criticism of Tommy Robinson is allowed, but merely mentioning his name in a neutral context is considered support of a banned hate preacher.

For example, Facebook removed a post by a member of the Danish right-wing party the New Right in which he complained that he risks being blocked by various social media platforms if he writes the name. A Danish public broadcaster interviewed Facebook's Head of Communications about the platform's moderation policy and linked to the interview from its Facebook page; the post was initially taken down because it mentioned Tommy Robinson. Facebook users in Denmark and Sweden also report that posts mentioning Robinson were taken down within minutes of publication.

A blogger from a left-wing political party was banned from Facebook for 24 hours for calling Tommy Robinson an "idiot" in a blog post that also criticised Facebook's excessive moderation policies. This suggests that the removals are automated and made without consideration of context, contrary to Facebook's claim that only support and representation of hate preachers are banned. It also raises questions about restrictions on freedom of expression and about how we discuss and debate online.

If you write this name, you will be blocked on social media (only in Danish, 17.09.2019)

Danish public broadcaster’s interview with Facebook taken down (only in Danish, 23.09.2019)

E-Commerce review: Opening Pandora’s box? (20.06.2019)

(Contribution by Dean Willis, EDRi intern)