08 Jul 2020

Europol: Non-accountable cooperation with IT companies could go further

By Chloé Berthélémy

There is an ongoing mantra among law enforcement authorities in Europe according to which private companies are indispensable partners in the fight against “cyber-enabled” crimes, as they are often in possession of personal data relevant for law enforcement operations. For that reason, police authorities increasingly attempt to lay hands on data held by companies – sometimes in disregard of the safeguards imposed by long-standing judicial cooperation mechanisms. Several initiatives at European Union (EU) level, like the proposed Regulation on European Production and Preservation Orders for electronic evidence in criminal matters (the so-called “e-evidence” Regulation), seek to “facilitate” that access to personal data by national law enforcement authorities. Now it’s Europol’s turn.

The Europol Regulation entered into force in 2017, authorising the European Police Cooperation Agency (Europol) to “receive” (but not directly request) personal data from private parties like Facebook and Twitter. The goal was to enable Europol to gather personal data, feed it into its databases and support Member States in their criminal investigations. The Commission was supposed to specifically evaluate this practice of receiving and transferring personal data with private companies after two years of implementation (in May 2019). However, there is no public information on whether the Commission actually conducted such an evaluation, nor on its modalities and results.

Despite the absence of this assessment’s results and of a fully-fledged evaluation of Europol’s mandate, the Commission and the Council consider the current legal framework too limiting and have therefore decided to revise it. The legislative proposal for a new Europol Regulation is planned to be released at the end of this year.

One of the main policy options foreseen is to lift the ban on Europol’s ability to proactively request data from private companies or query databases managed by private parties (e.g. WHOIS). However, disclosures by private actors would remain “voluntary”. Just as the EU Internet Referral Unit operates without any procedural safeguards or strong judicial oversight, this extension of Europol’s executive powers would barely comply with the EU Charter of Fundamental Rights, which requires that restrictions of fundamental rights (here, the right to privacy) be necessary, proportionate and “provided for by law” (rather than based on ad hoc “cooperation” arrangements).

This is why, in light of the Commission’s consultation call, EDRi shared the following remarks:

  • EDRi recommends first carrying out a full evaluation of the 2016 Europol Regulation before expanding the agency’s powers, in order to base the revision of its mandate on proper evidence;
  • EDRi opposes the Commission’s proposal to expand Europol’s powers in the field of data exchange with private parties as it goes beyond Europol’s legal basis (Article 88(2));
  • The extension of Europol’s mandate to request personal data from private parties promotes the voluntary disclosure of personal data by online service providers, which goes against the EU Charter of Fundamental Rights and national and European procedural safeguards;
  • The procedure by which Europol accesses EU databases should be reviewed and include the involvement of an independent judicial authority;
  • The Europol Regulation should grant the Joint Parliamentary Scrutiny Group real oversight powers.

Read our full contribution to the consultation here.

Read more:

Europol: Non-transparent cooperation with IT companies (18.05.16)
https://edri.org/europol-non-transparent-cooperation-with-it-companies/

Europol: Delete criminals’ data, but keep watch on the innocent (27.03.18)
https://edri.org/europol-delete-criminals-data-but-keep-watch-on-the-innocent/

Oversight of the new Europol regulation likely to remain superficial (12.07.16)
https://edri.org/europol-delete-criminals-data-but-keep-watch-on-the-innocent/

(Contribution by Chloé Berthélémy, EDRi policy advisor)

24 Jun 2020

French Avia law declared unconstitutional: what does this teach us at EU level?

By Chloé Berthélémy

On 18 June, the French Constitutional Council, the constitutional authority in France, declared the main provisions of the “Avia law” unconstitutional. France’s legislation on hate speech was adopted in May despite being severely criticised by nearly all sides: the European Commission, the Czech Republic, digital rights organisations, and LGBTQI+, feminist and antiracist organisations. Opposed to the main measures throughout the legislative process, the French Senate brought the law before the Constitutional Council as soon as it was adopted.

The Court’s ruling represents a major victory for digital freedoms, not only for French people, but potentially for all Europeans. In past years, France has been championing its law enforcement model for the fight against (potentially) illegal online content at the European Union (EU) level, especially in the framework of the Terrorist Content Regulation, currently in hard-nosed negotiations. This setback from the Constitutional Council’s decision will likely re-shuffle the cards in current and future European content regulation files.

The Avia law is “not necessary, appropriate and proportionate”

In its decision, the Constitutional Council held that certain provisions infringe “on freedom of speech and communication, and are not necessary, appropriate and proportionate to the aim pursued”. Looking at the details of the ruling, the Council quashed the following measures, which were meant to force the removal of allegedly illegal content:

  • The “notice-and-action” system by which any user can flag “manifestly illegal” content (from a long pre-set list of offences) and the notified online service provider is required to remove it within 24 hours;
  • The reduction of the intermediary’s deadline to remove illegal terrorist content and child sexual abuse material to one hour after receipt of a notification from an administrative authority;
  • All the best-efforts obligations linked to the unconstitutional removal measures above, such as transparency obligations (in terms of access to redress mechanisms and content moderation practices, including the number of removed items and the rate of wrongful takedowns);
  • The oversight mandate given to the Conseil supérieur de l’audiovisuel (the French High Audiovisual Council) to monitor the implementation of those best-efforts obligations.

Plot twist!

The Court’s decision will have a decisive impact on the European negotiations on the draft Regulation against the dissemination of terrorist content online. The European Commission hastily published the draft legislation in 2018 under pressure from France and Germany, hoping for a quick adoption that would serve its electoral communication strategy. However, since the trilogues started, the European Parliament and the Council of Member States have been facing a persistent deadlock over the proposal’s main measures.

In this context, the Constitutional Council’s ruling comes as a massive blow to the Commission’s and France’s well-rounded advocacy. In particular, France has been pushing to expand the definition of what constitutes a “competent authority” (institutions with legal authority to make content determinations) under the Regulation to include administrative (aka law enforcement) authorities. Consequently, law enforcement agents would be allowed to issue orders to remove or disable access to illegal terrorist content within an hour. The Council declared this type of measure a clear breach of the French Constitution, pointing out the lack of judicial involvement in determining whether a specific piece of content is illegal or not, and the incentives (in the form of strict deadlines and heavy sanctions) to overzealously block perfectly legal speech. It drew similar conclusions for the legal arrangements addressing potential hate speech.

In general, the Council underlines that only the removal of manifestly illegal content can be ordered without a judge’s prior authorisation. However, assessing that a certain piece of content is manifestly illegal requires a minimum of analysis, which is impossible in such a short time frame. Inevitably, this decision weakens the pro-censorship hardliners’ position in European debates.

Ahead of the Digital Services Act, a legislative package which will update the EU rules governing online service providers’ responsibilities, the European legislators should pay particular attention to this ruling to guarantee respect for fundamental rights. EDRi and its members will continue to monitor the development of these files and engage with the institutions in the upcoming period.

Read more:

(In French) La Quadrature du Net, Loi haine: le Conseil constitutionnel refuse la censure sans juge (Hate law: the Constitutional Council rejects censorship without a judge) (18.06.2020)
https://www.laquadrature.net/2020/06/18/loi-haine-le-conseil-constitutionnel-refuse-la-censure-sans-juge/

EFF, Victory! French High Court Rules That Most of Hate Speech Bill Would Undermine Free Expression (18.06.2020)
https://www.eff.org/press/releases/victory-french-high-court-rules-most-hate-speech-bill-would-undermine-free-expression

Constitutional Council declares French hate speech ‘Avia’ law unconstitutional (18.06.2020)
https://www.article19.org/resources/france-constitutional-council-declares-french-hate-speech-avialaw-unconstitutional/

France’s law on hate speech gets a thumbs down (04.12.2019)
https://edri.org/frances-law-on-hate-speech-gets-thumbs-down/

(Contribution by Chloé Berthélémy, EDRi Policy Advisor)

13 May 2020

COVID-Tech: COVID infodemic and the lure of censorship

By Chloé Berthélémy

In EDRi’s series on COVID-19, COVID-Tech, we will explore the critical principles for protecting fundamental rights while curtailing the spread of the virus, as outlined in the EDRi network’s statement on the virus. Each post in this series will tackle a specific issue at the intersection of digital rights and the global pandemic in order to explore broader questions about how to protect fundamental rights in a time of crisis. In our statement, we emphasised the principle that states must “defend freedom of expression and information”. In this second post of the series, we take a look at the impact that measures to fight the spread of misinformation could have on freedom of expression and information. Automated tools, content-analysing algorithms and state-sponsored content moderation have all become normal under COVID-19, and this is a threat to many of our essential fundamental rights.

We already knew that social media companies perform pretty badly when it comes to moderating content on their platforms. Regardless of the measures they deploy (whether using automated processes or employing human moderators), they make discriminatory and arbitrary decisions. They fail to understand context and cultural and linguistic nuances. Lastly, they provide no effective access to remedies.

In times of a global health crisis where accessing vital health information, keeping social contact and building solidarity networks are so important, online communications, including social media and other content hosting services, have become even more essential tools. Unfortunately, they are also vectors of disinformation and misinformation that erupt in such exceptional situations and threaten public safety and governmental responses. However, private companies – whether voluntarily or pressured by governments – should not impose over-strict, vague, or unpredictable restrictions on people’s conversations about important topics.

Automated tools don’t work: what a surprise!

As the COVID-19 crisis broke out, emergency health guidelines forced big social media companies to send their content moderators home. Facebook and the like promised to live up to expectations by basing daily content moderation on their so-called artificial intelligence. It only took a few hours to observe glitches in the system.

Their “anti-spam” systems were striking down quality COVID-19 content from trustworthy sources as violations of the platforms’ community guidelines. Sharing newspaper articles, linking to official governmental websites or simply mentioning the term “coronavirus” in a post could result in your content being pre-emptively blocked.
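
To make the failure mode concrete, here is a minimal, purely illustrative sketch of a keyword-based filter of the kind described above. It does not reflect any platform’s actual system; the blocked terms and example posts are invented for illustration.

```python
# Illustrative only: a naive keyword filter, not any platform's real system.
# It flags a post whenever a blocked term appears, ignoring who posted it,
# what it links to, and whether the information is accurate.

BLOCKED_TERMS = {"coronavirus", "miracle cure"}  # invented example terms

def is_blocked(post: str) -> bool:
    """Return True if the post contains any blocked term, regardless of context."""
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

posts = [
    "Official guidance on coronavirus hand-washing from the health ministry.",  # legitimate
    "Buy this miracle cure today!!!",                                            # actual spam
    "Our newspaper's coronavirus live blog, updated hourly.",                    # legitimate
]

for post in posts:
    print(is_blocked(post), "-", post)
# All three posts are blocked: the filter cannot distinguish trustworthy
# reporting and official information from spam.
```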

This whole trend perfectly demonstrates why relying on automated processes can only be detrimental to freedom of expression and to the freedom to receive and impart information. The situation even led the Alan Turing Institute to suggest that content moderators should be considered “key workers” during the COVID-19 pandemic.

Content filters show high margins of error and are prone to over-censoring. Yet the European Parliament adopted a resolution on the EU’s response to the pandemic which calls on social network companies to proactively monitor and “stop disinformation and hate speech”. In the meantime, the European Commission continues its “voluntary approach” with the social media platforms and is contemplating proposing a regulation soon.

Criminalising misinformation: a step too far

In order to respond swiftly to the COVID-19 health crisis, some Member States are desperately trying to control the flow of information about the spread of the virus. In their efforts, they are seduced by hasty legislation that criminalises disinformation and misinformation, which may ultimately lead to state-sponsored censorship and the suppression of public discourse. For instance, Romania granted new powers to its National Authority for Administration and Regulation in Communications to order take-down notices for websites containing “fake news”. Draft legislation in neighbouring Bulgaria originally included the criminalisation of the spread of “internet misinformation”, with fines of up to 1,000 euros and even imprisonment of up to three years. In Hungary, new emergency measures include the prosecution and potential imprisonment of those who spread “false” information.

The risks of abuse of such measures and of unjustified interference with the right to freedom of expression directly impair the media’s ability to provide objective and critical information to the public, which is crucial for individuals’ well-being in times of a national health crisis. While extraordinary situations definitely require extraordinary measures, these have to remain proportionate, necessary and legitimate. Both the EU and Member States must refrain from undue interference and censorship and instead focus on measures that promote media literacy and protect and support diverse media both online and offline.

None of the approaches taken so far shows a comprehensive understanding of the mechanisms that enable the creation, amplification and dissemination of disinformation as a result of curation algorithms and online advertising models. It is extremely risky for a democratic society to rely on only a very few communication channels, owned by private actors whose business model feeds on sensationalism and shock.

The emergency measures being adopted in the fight against the COVID-19 health crisis will determine what European democracies look like in its aftermath. The upcoming Digital Services Act (DSA) is a great opportunity for the EU to address the monopolisation of our online communication space. Further action should be taken specifically in relation to the micro-targeting practices of the online advertising industry (Ad Tech). This crisis has also shown us that the DSA needs to create meaningful transparency obligations for a better understanding of the use of automation and for future research – starting with transparency reports that include information about content blocking and removal.

What we need for a healthy public debate online is not gatekeepers empowered by governments to restrict content in a non-transparent and arbitrary manner. Instead, we need diversified, community-led and user-empowering initiatives that allow everyone to contribute and participate.

Read more:

Joint report by Access Now, Civil Liberties Union for Europe, European Digital Rights, Informing the “disinformation” debate (18.10.18)
https://edri.org/files/online_disinformation.pdf

Access Now, Fighting misinformation and defending free expression during COVID-19: Recommendations for States (21.04.20)
https://www.accessnow.org/cms/assets/uploads/2020/04/fighting-misinformation-and-defending-free-expression-during-covid-19-recommendations-for-states-1.pdf

Digital rights as a security objective: Fighting disinformation (05.12.18)
https://edri.org/digital-rights-as-a-security-objective-fighting-disinformation/

ENDitorial: The fake fight against fake news (25.07.18)
https://edri.org/enditorial-the-fake-fight-against-fake-news/

(Contribution by Chloé Berthélémy, EDRi Policy Advisor)

29 Apr 2020

Everything you need to know about the DSA

By Chloé Berthélémy

In her political guidelines, the President of the European Commission Ursula von der Leyen has committed to “upgrade the Union’s liability and safety rules for digital platforms, services and products, with a new Digital Services Act” (DSA). The upcoming DSA will revise the rules contained in the E-Commerce Directive of 2000 that affect how intermediaries regulate and influence user activity on their platforms, including people’s ability to exercise their rights and freedoms online. This is why reforming those rules has the potential to be either a big threat to fundamental rights or a major improvement of the current situation online. It is also an opportunity for the European Union to decide how central aspects of the internet will look in the coming ten years.

A public consultation by the European Commission is planned to be launched in May 2020 and legislative proposals are expected to be presented in the first quarter of 2021.

In the meantime, three different Committees of the European Parliament have announced or published Own Initiative Reports as well as Opinions in view of setting the agenda of what the DSA should regulate and how it should achieve its goals.

We have created a document pool in which we will be listing relevant articles and documents related to the DSA. This will allow you to follow the developments of content moderation and regulatory actions in Europe.

Read more:

Document pool: Digital Service Act (27.04.2020)
https://edri.org/digital-service-act-document-pool/

11 Mar 2020

Stuck under a cloud of suspicion: Profiling in the EU

By Chloé Berthélémy

As facial recognition technologies are gradually rolled out in police departments across Europe, anti-racism groups blow the whistle on the discriminatory over-policing of racialised communities linked to the increasing use of new technologies by law enforcement agents. In a report by the European Network Against Racism (ENAR) and the Open Society Justice Initiative, daily police practices supported by specific technologies – such as crime analytics, the use of mobile fingerprinting scanners, social media monitoring and mobile phone extraction – are analysed, to uncover their disproportionate impact on racialised communities.

Besides these local and national policing practices, the European Union (EU) has also played an important role in developing police cooperation tools that are based on data-driven profiling. Exploiting the narrative according to which criminals abuse the Schengen area and free movement, the EU justifies the mass monitoring of the population and profiling techniques as part of its Security Agenda. Unfortunately, no proper democratic debate takes place before these technologies are deployed.

What is profiling in law enforcement?

Profiling is a technique whereby a large amount of data is extracted (“data mining”) and analysed (“processing”) to draw up patterns or types of behaviour that help classify individuals. In the context of security policies, some of these categories are then labelled as “presenting a risk” and as needing further examination – either by a human or another machine. Profiling thus works as a filter applied to the results of a general monitoring of everyone, and it lies at the root of predictive policing.

In Europe, data-driven profiling, used mostly for security purposes, spiked in the immediate wake of terrorist attacks such as the 2004 Madrid and 2005 London attacks. As a result, EU counter-terrorism and internal security policies – and their underlying policing practices and tools – are informed by racialised assumptions, including specifically anti-Muslim and anti-migrant sentiments, leading to racial profiling. Contrary to what security and law enforcement agencies claim, the technology is not immune to those discriminatory biases and is not objective in its endeavour to prevent crime.

European initiatives

The EU has been actively supporting profiling practices. First, the Anti-Money Laundering and Counter-Terrorism Directives oblige private actors such as banks, auditors and notaries to report suspicious transactions that might be linked to money laundering or terrorist financing, as well as to establish risk assessment procedures. “Potentially risky” profiles are created from risk factors which are not always chosen objectively, but rather based on racialised prejudice about what constitutes an “abnormal financial activity”. As a consequence, migrants, cross-border workers and asylum seekers are usually over-represented among the individuals matching this profile.

Another example is the Passenger Name Record (PNR) Directive of 2016. The Directive requires airline companies to collect the personal data of people travelling from EU territory to third countries and to share it with all EU Member States. The aim is to identify certain categories of passengers as “high-risk passengers” who need further investigation. There are ongoing discussions on the possibility of extending this system to rail and other public transport.

More recently, the multiplication of EU databases in the field of migration control, and their interconnection, has facilitated the incorporation of profiling techniques to analyse and cherry-pick “good” candidates. For example, the Visa Information System, whose revision is currently on a fast track, consists of a database that currently holds up to 74 million short- and long-stay visa applications, which are run against a set of “risk indicators”. Such “risk indicators” consist of a combination of data including the age range, sex, nationality, the country and city of residence, the EU Member State of first entry, the purpose of travel, and the current occupation. The same logic is applied in the European Travel Information and Authorisation System (ETIAS), a tool slated for 2022 aimed at gathering data about third-country nationals who do not require a visa to travel to the Schengen area. The risk indicators used in that system also aim at “pointing to security, illegal immigration or high epidemic risks”.
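
To illustrate the mechanism in the abstract – and only the mechanism, since the actual VIS and ETIAS rules are not public – here is a minimal sketch of rule-based matching against “risk indicators”. Every field, indicator and applicant record in it is invented.

```python
# Purely illustrative sketch of rule-based profiling with "risk indicators".
# The indicators and the application below are invented and do not reproduce
# any real EU system.

risk_indicators = [
    # An application that matches *every* field of an indicator is flagged
    # for "further examination".
    {"age_range": "18-30", "purpose_of_travel": "study", "first_entry": "HU"},
    {"nationality": "XX", "occupation": "unemployed"},
]

application = {
    "age_range": "18-30",
    "sex": "F",
    "nationality": "XX",
    "residence_country": "XX",
    "first_entry": "HU",
    "purpose_of_travel": "study",
    "occupation": "student",
}

def flag(application: dict, indicators: list) -> bool:
    """Return True if the application matches all fields of any indicator."""
    return any(
        all(application.get(key) == value for key, value in indicator.items())
        for indicator in indicators
    )

print(flag(application, risk_indicators))  # True: flagged on group attributes alone
```

Note that nothing in such a rule refers to the person’s individual conduct: the flag rests entirely on group attributes, which is where the discrimination risks discussed below arise.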

Why are fundamental rights in danger?

Profiling practices rely on the massive collection and processing of personal data, which represents a great risk for the rights to privacy and data protection. Since most policing instruments pursue a public security interest, they are considered legitimate. However, few actually meet transparency and accountability requirements, and they are thus difficult to audit. The essential legality tests of necessity and proportionality prescribed by the EU Charter of Fundamental Rights cannot be carried out: only a concrete danger – not the potentiality of one – can justify interferences with the rights to respect for private life and data protection.

In particular, the criteria used to determine which profiles need further examination are opaque and difficult to evaluate. What categories and what data are being selected and evaluated? By whom? Regarding the ETIAS system, the EU Fundamental Rights Agency stressed that it was unclear whether risk indicators could be used without discriminating against certain categories of people in transit, and therefore recommended postponing the use of profiling techniques. Generalising about entire groups of persons based on specific grounds must be checked against the right to non-discrimination. Further, it is troublesome that the missions of evaluating and monitoring profiling practices are given to “advisory and guiding boards” hosted by law enforcement agencies such as Frontex. Excluding data protection supervisory authorities and democratic oversight bodies from this process is very problematic.

Turning several neutral features or behaviours into signs of an undesirable or even mistrusted profile can have dramatic consequences for the lives of individuals. Having your features match a “suspicious profile” can lead to restrictions of your rights. For example, in the area of counter-terrorism, your right to effective remedies and a fair trial can be hampered: you are usually not aware that you have been placed under surveillance as a result of a match in the system, and you find yourself unable to contest such a measure.

As law enforcement authorities across Europe increasingly engage in profiling practices, it is crucial that substantive safeguards are put in place to mitigate the many dangers they entail for individuals’ rights and freedoms.

Data-driven policing: the hardwiring of discriminatory policing practices across Europe (19.11.2019)
https://www.enar-eu.org/IMG/pdf/data-driven-profiling-web-final.pdf

New legal framework for predictive policing in Denmark (22.02.2017)
https://edri.org/new-legal-framework-for-predictive-policing-in-denmark/

Data Protection, Immigration Enforcement and Fundamental Rights: What the EU’s Regulations on Interoperability Mean for People with Irregular Status (14.11.2019)
https://www.statewatch.org/analyses/Data-Protection-Immigration-Enforcement-and-Fundamental-Rights-Full-Report-EN.pdf

Preventing unlawful profiling today and in the future: a guide (14.12.2018)
https://fra.europa.eu/sites/default/files/fra_uploads/fra-2018-preventing-unlawful-profiling-guide_en.pdf

(Contribution by Chloé Berthélémy, EDRi)

04 Mar 2020

E-evidence and human rights: The Parliament is not quite there yet

By Chloé Berthélémy

The European Parliament Committee on Civil Liberties (LIBE) is currently busy working out a compromise between its different political groups in order to establish a common position on the “e-evidence” Regulation. It is an important step in the legislative process, since the Parliament’s position will be the only bulwark protecting human rights in cross-border law enforcement against the Council’s and the Commission’s highly problematic e-evidence ideas.

To ensure that fundamental rights are protected when law enforcement authorities in an EU Member State act outside their own country, the Parliament compromise should do the following:

✅ Do: The authority in the executing Member State – and, where applicable, the affected State – should be obliged to confirm or reject an order before the online service provider can execute it. Some compromises on the table suggest that after a certain period of time (for now, 10 days) without a reaction from the executing authority, the service provider should simply assume a green light. This is a risky shortcut because it creates an incentive for an underfunded and understaffed executing authority to ignore requests and let the 10-day deadline pass without action. Given that many service providers are currently based in Ireland, whose law enforcement authorities are disproportionately flooded by these requests, this is a very real scenario which would practically annul many of the otherwise very important human rights safeguards built in by Member of the European Parliament (MEP) Birgit Sippel, Rapporteur for this file in the LIBE Committee. In practice, it would mean that the service providers become the ultimate safety net against abusive requests and fundamental rights breaches. The explicit approval of the executing authority must therefore be mandatory before an order can be executed.

❌ Don’t: Some compromise proposals imply that orders to access subscriber data would have no suspensive effect on the handing over of data. Those proposals assume that – if an order is found invalid at a later stage – the data could simply be declared inadmissible in court and be deleted by the issuing authority. In practice, however, this notion is misleading at best. Once an investigating police officer learns a suspect’s identity, they will not be able to unknow that identity just because the subscriber data they learned it from has been declared inadmissible and deleted. What is worse, if the knowledge of the identity itself were declared inadmissible, the whole investigation would collapse. In order to ensure legal certainty for investigating officers, again, an order should not be executed without the authorisation of the executing authority. This procedural requirement should apply to all types of data and orders.

✅ Do: Just as the service provider is given the possibility to refuse an order because it is manifestly abusive, the executing authority should be obliged to check that the order is proportionate as part of its refusal grounds checklist.

❌ Don’t: Considering the state of the rule of law in certain EU Member States, executing an order from a State where the independence of the judiciary is not guaranteed would be incredibly risky for the protection of fundamental rights. In line with the jurisprudence of the Court of Justice of the EU on the independence of judicial authorities, any executing authority should refuse to execute orders from States subjected to Article 7 proceedings.

If adopted, these changes would strengthen the Parliament’s Report and ensure it is able to defend citizens’ rights during the e-evidence negotiations with the Commission and the Council.

EU Council’s general approach on “e-evidence”: From bad to worse (19.12.2019)
https://edri.org/eu-councils-general-approach-on-e-evidence-from-bad-to-worse/

Cross-border access to data for law enforcement: Document pool
https://edri.org/cross-border-access-to-data-for-law-enforcement-document-pool/

26 Feb 2020

Click here to allow notifications in cross-border access to data

By Chloé Berthélémy

From a fundamental rights perspective, it is essential that the proposal enabling cross-border access to data for criminal proceedings (“e-evidence”) includes a notification mechanism. However, this notification requirement seems to be out of the question for those advocating for the “efficiency” of cross-border criminal investigations, even if that means abandoning the most basic procedural safeguards that exist in European judicial cooperation. Another argument against notifying at least one second authority is that the system would be “too complicated”.

To solve these intricacies, others (with similar goals in mind) have proposed restricting the scope of the future legal instrument to “domestic” cases only. This means that the Production and Preservation Orders would only be used in cases where the person whose data is being sought resides in the country of the issuing authority. For example, if a French prosecutor is investigating a case involving a French resident as a suspect, she would not need to request the approval of another State. The executing State – where the online service provider is established – would be required to intervene only in cases where the provider refuses to execute the Order.

“Domestic” cross-border cases?

Traditionally, EU Member States apply their own national laws to summon service providers established on their territory to hand over their customers’ data in criminal matters. However, when the evidence being sought is located in another Member State, the European Union’s rules for judicial cooperation kick in. The new regime, as proposed by the EU Commission, allows Member States to extend their judicial jurisdiction beyond their territorial boundaries. This affects the sovereignty of other states – and this is why we talk about cross-border access to data and cannot refer to “purely domestic cases”.

The notification of a second authority, notably the executing authority, is essential for the new instrument to be compatible with EU primary law (the EU treaties) and secondary law. For the principle of mutual recognition – which is the legal basis chosen for the proposal – to apply, it is indeed crucial that the executing State is first made aware that an order is being executed on its territory, before it is able to recognise it.

Notification to the executing authority

Some stakeholders in the e-evidence debate grumble about the administrative burden that a notification procedure would entail. They further underline the problematic situation it would create for Irish judicial authorities, since Ireland hosts so many of the prevailing service providers. However, there are several counter-arguments to these claims:

  1. The proposal does not solely cover circumstances in which Ireland will be involved but all cross-border cases in the EU;
  2. It is vital to design policies that are future-proof. The current lack of European harmonisation in the field of taxation policies impacts the efficiency of judicial cooperation instruments. This doesn’t, however, mean it will always be that way. It is also not a satisfactory justification for bypassing fundamental rights safeguards;
  3. Service providers should not be required to execute orders that are unlawful under the law of the country where they are located;
  4. In cases where the affected person’s residence is unknown, a notification mechanism to access identifying information would be imperative, because there would be no indication at this stage of the investigation as to whether there is a third State concerned or not.

There are cases in which the affected person resides somewhere other than in the issuing State. Excluding these cases from the e-evidence proposal’s scope would relieve law enforcement authorities of a couple of additional legal checks – in particular, it would allow them to avoid verifying immunities or other specific protections that are granted by national laws and that restrict access to certain categories of personal data. Nonetheless, excluding those cases would not suffice to meet the obligation to respect fundamental rights and rule of law standards provided in the EU legal system when using mutual recognition instruments. Instead, a notification mechanism with an obligation to confirm or refuse the order for both the executing and the affected States should feature in the final text.

Double legality check in e-evidence: Bye bye “direct data requests” (12.02.2020)
https://edri.org/double-legality-check-in-e-evidence-bye-bye-direct-data-requests/

“E-evidence”: Repairing the unrepairable (14.11.2019)
https://edri.org/e-evidence-repairing-the-unrepairable/

Independent study reveals the pitfalls of “e-evidence” proposals (10.10.2018)
https://edri.org/independent-study-reveals-the-pitfalls-of-e-evidence-proposals/

EU “e-evidence” proposals turn service providers into judicial authorities (17.04.2018)
https://edri.org/eu-e-evidence-proposals-turn-service-providers-into-judicial-authorities/

(Contribution by Chloé Berthélémy, EDRi)

12 Feb 2020

Double legality check in e-evidence: Bye bye “direct data requests”

By Chloé Berthélémy

After having tabled some 600 additional amendments, members of the European Parliament Committee on Civil Liberties (LIBE) are still discussing the conditions under which law enforcement authorities in the EU should access data for their criminal investigations in cross-border cases. One of the key areas of debate is the involvement of a second authority in the access process – usually the judicial authority in the State in which the online service provider is based (often called the “executing State”).

To prevent the misuse of this new cross-border data access instrument, LIBE Committee Rapporteur Birgit Sippel’s draft Report had angered the Commission by proposing that the executing State should receive, by default, the European Preservation or Production Order at the same time as the service provider. It should then have ten days to evaluate and possibly object to an Order by invoking one of the grounds for non-recognition or non-execution – including a breach of the EU Charter of Fundamental Rights.

What is more, the Sippel Report proposes that if it is clear from the early stages of the investigation that a suspected person resides neither in the Member State that is seeking data access (the issuing State) nor in the executing State where the service provider is established, the judicial authorities of the State in which the person resides (the affected State) should also get the chance to intervene.

Notification as a fundamental element of EU judicial cooperation

The reasoning behind such a notification system is compelling: entrusting one single authority to carry out the full legality and proportionality assessment for two or even three different jurisdictions (the issuing, the executing and the affected State) is careless at best. A national prosecutor or judge alone cannot possibly take into account all national security and defence interests, immunities and privileges and the legal frameworks of the other Member States, nor the special protections a suspected person may have in their capacity as a lawyer, doctor or journalist. This is especially relevant if the other Member States’ rules are different from or even incompatible with the rules of the prosecutor’s own domestic investigation. Examination by a second judicial authority with a genuine possibility to review the Order is therefore of paramount importance to ensure its legality.

The LIBE Committee is currently discussing the details of this notification process. Some amendments that were tabled unfortunately try to undermine the protections that the notification requirement would bring. For example, some try to restrict the notification to Production Orders only (where data is transmitted directly), excluding all Preservation Orders (where the data is merely frozen and must be acquired with a separate Order). Others try to limit notification to transactional data (aka metadata) or content data, alleging that subscriber data is somehow less sensitive and therefore needs less protection. Lastly, some propose that the notification should not have a suspensive effect on the service provider’s obligation to respond to an order, meaning that if the notified State objects to an order after the service provider has already handed over the data, it is too late.

The Parliament should uphold the basic principles of human rights law

If accepted, some of those amendments would bring the Parliament position dangerously close to the Council’s highly problematic weak notification model which does not provide any of the necessary safeguards it is supposed to have. To ensure the human rights compliance of the procedure, notifying the executing and the affected State should be mandatory for all types of data and Orders. Notifications should be simultaneously sent to the relevant judicial authority and the online service provider, and the latter should wait for a positive reaction from the former before executing the Order. The affected State should have the same grounds for refusal as the executing State, because it is best placed to protect its residents and their rights.

There seems to be a general consensus in the European Parliament about the involvement of a second judicial authority in the issuance of Orders. Meanwhile, the Commission grits its teeth and continues to pretend that mutual trust among EU Member States is all that is needed to protect people from law enforcement overreach. So far, the Commission seems to refuse to see the tremendous risks that its “e-evidence” proposal entails – especially in a context where some Member States are subjected to Article 7 proceedings, which could lead to the suspension of some of their rights as Member States because of the endangered independence of their judicial systems and potential breaches of the rule of law. Mutual trust should not serve as an excuse to undermine individuals’ fundamental right to data protection and the basic principles of human rights law.

Cross-border access to data for law enforcement: Document pool
https://edri.org/cross-border-access-to-data-for-law-enforcement-document-pool/

“E-evidence”: Repairing the unrepairable (14.11.2019)
https://edri.org/e-evidence-repairing-the-unrepairable/

EU rushes into e-evidence negotiations without common position (19.06.2019)
https://edri.org/eu-rushes-into-e-evidence-negotiations-without-common-position/

Recommendations on cross-border access to data (25.04.2019)
https://edri.org/files/e-evidence/20190425-EDRi_PositionPaper_e-evidence_final.pdf

(Contribution by Chloé Berthélémy, EDRi)

18 Dec 2019

Online content moderation: Where does the Commission stand?

By Chloé Berthélémy

The informal discussions (trilogues) between the European Parliament, the Council of the European Union and the European Commission are progressing on the Terrorist Content Regulation (TCO, aka “TERREG”). While users’ safeguards and rights-protective measures remain the Parliament’s red lines, the Commission presses the co-legislators to adopt what was a pre-elections public relations exercise, rather than an urgently needed piece of legislation. Meanwhile, the same European Commission just delivered a detailed opinion to France criticising its currently debated hate speech law (“Avia law”). The contrast between the Commission’s positions supporting certain measures in the Terrorist Content Regulation and opposing similar ones in the French Avia law is so striking that it is difficult to believe they come from the same institution.

Scope of targeted internet companies

In its letter to the French government, the Commission mentions that “it is not certain that all online platforms in the scope of the notified project […] pose a serious and grave risk” in light of the objective of fighting hate speech online. The Commission also notes that the proportionality of the envisaged measures is doubtful and is missing a clear impact assessment, especially for small and medium-sized enterprises (SMEs) established in other EU Member States.

These considerations for proportionate and targeted legislative measures have completely vanished in the context of the Terrorist Content Regulation. The definition set out in the Commission’s draft Regulation is too broad and covers an extremely large, diverse and unpredictable range of entities. Notably, it covers even small communications platforms with a very limited number of users. The Commission asserts that terrorist content is currently being disseminated over smaller sites, and therefore, the Regulation obliges them “to take responsibility in this area”.

What justifies these two very different approaches to a similar problem? That is not clear. On the one hand, the Commission denounces the missing evaluation of the impact that an obligation to adopt measures preventing the redistribution of illegal content (“re-upload filters”) in the Avia law would have on European SMEs. On the other hand, its impact assessment on the Terrorist Content Regulation does not provide any analysis of the costs that setting up hash databases for the automated removal of content would entail, and it still pushes for such “re-upload filters” in trilogues.
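
For readers unfamiliar with the term, here is a hypothetical, simplified sketch of what a hash database for “re-upload filtering” does. It is not the Commission’s specification: real deployments rely on perceptual hashing and shared industry databases, and operate at a very different scale.

```python
# Hypothetical sketch of a hash-based "re-upload filter", for illustration only.
import hashlib

known_illegal_hashes = set()

def register_removed(content: bytes) -> None:
    """Store the hash of content that has already been judged illegal and removed."""
    known_illegal_hashes.add(hashlib.sha256(content).hexdigest())

def block_on_upload(content: bytes) -> bool:
    """Check every new upload against the database of removed content."""
    return hashlib.sha256(content).hexdigest() in known_illegal_hashes

register_removed(b"<bytes of a previously removed video>")
print(block_on_upload(b"<bytes of a previously removed video>"))  # True: exact copy is blocked
print(block_on_upload(b"<same video, slightly re-encoded>"))      # False: exact hashes miss variants
```

Even this toy version shows where the costs sit: every provider, however small, has to build the database, check every single upload against it, and handle the inevitable errors – precisely the impact that the Commission’s assessment does not analyse.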

Expected reaction time frame for companies

The European Commission criticises the 24-hour deadline the French proposal introduces for companies to react to illegal content notifications. The Commission held that “any time limit set during which online platforms are required to act following notification of the presence of illegal content must also allow for flexibility in certain justified cases, for example where the nature of the content requires a more substantial assessment of its context that could not reasonably be made within the time limit set”. Considering the high fines in cases of non-compliance, the Commission believes the deadline could place a disproportionate burden on companies and lead to excessive deletion of content, thus undermining freedom of expression.

A year ago, however, the Commission strongly supported in its original TCO proposal the deletion of terrorist content online within one hour of receipt of a removal order. No exception for small companies was foreseen despite their limited resources to react within such a short time frame, leaving them no other choice than to pay the fines or apply automated processing, if they have the means to do so. Although removal orders do not technically require the platform to review the notified content within one hour, the Commission’s proposal allows any competent authority to issue such orders, even if it is not independent.

Terrorist content is as context-sensitive as hate speech

In the letter sent to the French government on the Avia law, the Commission argues that the French proposal could lead to a breach of Article 15(1) of the E-Commerce Directive, as it would risk forcing online platforms to engage in an active search for hosted content in order to comply with the obligation to prevent the re-upload of already identified illegal hate speech. Again, the Commission regrets that the French authorities did not provide sufficient evidence that this measure is proportionate and necessary in relation to its impact on fundamental rights, including the rights to privacy and data protection.

At the same time, in the TCO Regulation, the Commission (and the Council) seem uncompromising on the obligation for platforms to use “proactive measures” (aka upload filters). As in the copyright Directive discussions, EDRi maintains strong reservations against the mandatory use of upload filters, since they are error-prone, invasive and likely to produce “false positives” – nothing less than a profound danger for freedom of expression. For example, current filters used voluntarily by big platforms have taken down documentation of human rights violations and awareness-raising material against radicalisation.

The turn in the Commission’s position on the Avia law sets a positive precedent for online content regulation, including the upcoming Digital Services Act (DSA). We hope that the brand new Commission keeps a similarly sensible approach in future proposals.

Recommendations for the European Parliament’s Draft Report on the Regulation on preventing the dissemination of terrorist content online (December 2018)
https://edri.org/files/counterterrorism/20190108_EDRipositionpaper_TERREG.pdf

Trilogues on terrorist content: Upload or re-upload filters? Eachy peachy. (17.10.2019)
https://edri.org/trilogues-on-terrorist-content-upload-or-re-upload-filters-eachy-peachy/

EU’s flawed arguments on terrorist content give big tech more power (24.10.2018)
https://edri.org/eus-flawed-arguments-on-terrorist-content-give-big-tech-more-power/

How security policy hijacks the Digital Single Market (02.10.2019)
https://edri.org/how-security-policy-hijacks-the-digital-single-market/

(Contribution by Chloé Berthélémy, EDRi)

04 Dec 2019

Interoperability: A way to escape toxic online environments

By Chloé Berthélémy

The political debate on the future Digital Services Act mostly revolves around the question of online hate speech and how best to counter it. Whether based on state intervention or self-regulatory efforts, the solutions to address this legitimate public policy objective will be manifold. In its letter to France criticising the draft legislation on hateful content, the European Commission itself acknowledged that simply pushing companies to remove excessive amounts of content is undesirable and that the use of automatic filters is ineffective in the face of such a complex issue. Among the range of solutions that could help tackle online hate speech and foster spaces for free expression, a legislative requirement for certain market-dominant actors to be interoperable with their competitors would be an effective measure.

Trapped in walled gardens

Originally, the internet enabled everyone to interact with one another thanks to a series of standardised protocols. It allowed everyone to share knowledge, to help each other and to be visible online. Because the internet infrastructure is open, anyone can create their own platform or means of communication and connect with others. However, the internet landscape has changed dramatically in the last decade: the rise of Big Tech companies has resulted in a highly centralised online ecosystem built around a few dominant players.

Their power lies in the huge profits stemming from an opaque advertising business and in the enormous user bases that drag even more users into their services. In contrast to the historical openness of the internet, these companies strive for even higher profits by closing their systems and locking their customers in. Hence, the costs of leaving are too high for many to actually take the leap. This gives the companies absolute control over all the interactions taking place and the content posted on their services.

Facebook, Twitter, YouTube or LinkedIn decide for you what you should see next, track each action you make to profile you and decide whether your post hurts their commercial interests, and therefore should be removed, or not. In that context, you have no say in the making of the rules.

Unhealthy communications

What is most profitable for those companies is content that generates the most profiling data possible – arising from each user interaction. In that regard, pushing offensive, polarising and shocking content is the best strategy to capture users’ attention and trigger a reaction from them. Combining this with a system of rewards – likes, views, thumbs up – for those who use the same rhetorical strategies, these companies provide fertile ground for conflict and the spread of hate. In this toxic environment based on arbitrary decisions, this business model explains how death threats against women can thrive while LGBTQIA+ people are censored when discussing queer issues. Entire communities of users are dependent on the goodwill of the intermediaries and must endure sudden changes of “community guidelines” without any possibility to contest them.

When reflecting on the dissemination mechanisms of hate speech and violent content online, it is easy to understand that delegating the task of protecting victims to these very same companies is absolutely counter-intuitive. However, national governments and the EU institutions have mainly chosen this regulatory path.

Where is the emergency exit?

Another way would be to support the development of platforms with various commercial practices, degrees of user protection and content regulation standards. Such a diversified online ecosystem would give users a genuine choice of alternative spaces that fit their needs and even allow them to create their own, with chosen community rules. The key to this system’s success would be to maintain the link with the other social media platforms, where most of their friends and families still remain. Interoperability would guarantee everyone the possibility to leave without losing their social connections and to join another network where spreading hate is not as lucrative. On the one hand, interoperability can help escape the over-representation of hateful content and dictatorial moderation rules; on the other hand, it triggers the creation of human-scaled, open but safe spaces of expression.

In practice, interoperability allows a user on service A to interact with, read and show content to users on service B. This is technically feasible: for example, Facebook built its messaging service on an open protocol before 2015, and Twitter still permits users to tweet directly from third-party websites. It is crucial that this discussion takes place in the wider debate around the Digital Services Act: the questions of who decides how we consume daily news, how we connect with friends online and how we choose our functionalities should not be left to a couple of dominant companies.
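
As a purely illustrative sketch of what interoperability looks like in code – loosely inspired by federated approaches such as the ActivityPub standard, but not implementing any real protocol – two independent services below exchange posts through a common, agreed format:

```python
# Illustrative sketch: two independent services agree on a shared message
# format, so a post created on one can be read on the other. This does not
# implement any real federation protocol.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    author: str   # e.g. "alice@service-a.example" (invented addresses)
    text: str

@dataclass
class Service:
    name: str
    inbox: List[Post] = field(default_factory=list)

    def receive(self, post: Post) -> None:
        # Each service remains free to apply its own moderation rules here
        # before accepting content from elsewhere.
        self.inbox.append(post)

def deliver(post: Post, recipient: Service) -> None:
    """Hand a post across service boundaries using the shared format."""
    recipient.receive(post)

service_a = Service("service-a.example")
service_b = Service("service-b.example")

deliver(Post("alice@service-a.example", "Hello from another network!"), service_b)
print([p.text for p in service_b.inbox])  # Alice's post is readable on service B
```

The point of the sketch is that the social link survives the move: a user can leave one service’s moderation regime without losing the ability to reach the people who stayed.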

Content regulation – what’s the (online) harm? (09.10.2019)
https://edri.org/content-regulation-whats-the-online-harm/

France’s law on hate speech gets a thumbs down (04.12.2019)
https://edri.org/frances-law-on-hate-speech-gets-thumbs-down

Hate speech online: Lessons for protecting free expression (29.10.2019)
https://edri.org/hate-speech-online-lessons-for-protecting-free-expression/

E-Commerce review: Opening Pandora’s box? (20.06.2019)
https://edri.org/e-commerce-review-1-pandoras-box/

(Contribution by Chloé Berthélémy, EDRi)
