Stuck under a cloud of suspicion: Profiling in the EU
As facial recognition technologies are gradually rolled out in police departments across Europe, anti-racism groups are blowing the whistle on the discriminatory over-policing of racialised communities linked to the increasing use of new technologies by law enforcement agents. A report by the European Network Against Racism (ENAR) and the Open Society Justice Initiative analyses daily police practices supported by specific technologies – such as crime analytics, mobile fingerprinting scanners, social media monitoring and mobile phone extraction – and uncovers their disproportionate impact on racialised communities.
Besides these local and national policing practices, the European Union (EU) has also played an important role in developing police cooperation tools based on data-driven profiling. Exploiting the narrative that criminals abuse the Schengen area and free movement, the EU justifies mass monitoring of the population and profiling techniques as part of its Security Agenda. Unfortunately, no proper democratic debate takes place before these technologies are deployed.
What is profiling in law enforcement?
Profiling is a technique whereby a large amount of data is extracted (“data mining”) and analysed (“processing”) to draw up patterns or types of behaviour that help classify individuals. In the context of security policies, some of these categories are then labelled as “presenting a risk” and needing further examination – either by a human or by another machine. Profiling thus works as a filter applied to the results of generalised monitoring of everyone, and it lies at the root of predictive policing.
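In abstract terms, such a filter can be pictured as a simple rules engine. The sketch below is purely illustrative: the record fields and “risk” rules are invented for this example and do not come from any real law enforcement system, which would be far more complex and far less transparent.

```python
# A minimal, purely illustrative sketch of profiling-as-filtering.
# The rule set and record fields are hypothetical, not taken from any
# real system; they only show the general shape of the technique.
from typing import Callable

Record = dict  # one row of monitored data, e.g. a transaction log entry

# A hypothetical "risk profile": a set of conditions over record attributes.
RISK_PROFILE: list[Callable[[Record], bool]] = [
    lambda r: r.get("cash_deposits_per_month", 0) > 10,
    lambda r: r.get("cross_border_transfers_per_month", 0) > 3,
]

def matches_profile(record: Record) -> bool:
    """A record is flagged when it satisfies every condition of the profile."""
    return all(rule(record) for rule in RISK_PROFILE)

def flag_for_review(records: list[Record]) -> list[Record]:
    """The 'filter' step: applied to the output of generalised monitoring,
    it selects the subset of people marked for further examination."""
    return [r for r in records if matches_profile(r)]
```

What the sketch makes visible is that the “risk” rules are authored by people: whatever assumptions or prejudices go into choosing them are then automated at scale, behind an appearance of neutral computation.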
In Europe, the use of data-driven profiling for security purposes spiked in the immediate wake of terrorist attacks such as the 2004 Madrid and 2005 London bombings. As a result, EU counter-terrorism and internal security policies – and their underlying policing practices and tools – are informed by racialised assumptions, including specifically anti-Muslim and anti-migrant sentiments, leading to racial profiling. Contrary to what security and law enforcement agencies claim, the technology is neither immune to those discriminatory biases nor objective in its endeavour to prevent crime.
European initiatives
The EU has been actively supporting profiling practices. First, the Anti-Money Laundering and Counter-Terrorist Financing Directives oblige private actors such as banks, auditors and notaries to report suspicious transactions that might be linked to money laundering or terrorist financing, and to establish risk assessment procedures. “Potentially risky” profiles are built on risk factors that are not always chosen objectively, but rather reflect racialised prejudice about what constitutes “abnormal financial activity”. As a consequence, migrants, cross-border workers and asylum seekers are usually over-represented among the individuals matching such profiles.
Another example is the Passenger Name Record (PNR) Directive of 2016. The Directive obliges airline companies to collect the personal data of people travelling from EU territory to third countries and to share it among all EU Member States. The aim is to identify certain categories of passengers as “high-risk passengers” who need further investigation. There are ongoing discussions on extending this system to rail and other public transport.
More recently, the multiplication of EU databases in the field of migration control, and their interconnection, has facilitated the incorporation of profiling techniques to analyse and cherry-pick “good” candidates. For example, under a reform proposal currently on a fast track, the Visa Information System – a database that currently holds up to 74 million short- and long-stay visa applications – would run applications against a set of “risk indicators”. Such “risk indicators” consist of a combination of data including age range, sex, nationality, country and city of residence, EU Member State of first entry, purpose of travel, and current occupation. The same logic applies in the European Travel Information and Authorisation System (ETIAS), a tool slated for 2022 that will gather data about third-country nationals who do not need a visa to travel to the Schengen area. The risk indicators used in that system also aim at “pointing to security, illegal immigration or high epidemic risks”.
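To make the mechanics concrete, the hypothetical sketch below shows how an application could be screened against a combination of such indicators. The indicator values are invented for illustration only; the real indicators are defined by the authorities and are not public.

```python
# Illustrative screening of a visa/travel-authorisation application
# against "risk indicator" combinations. Fields mirror the categories
# of data listed above; indicator values are entirely hypothetical.
from dataclasses import dataclass, asdict

@dataclass
class Application:
    age: int
    sex: str
    nationality: str
    residence_country: str
    first_entry_state: str
    purpose_of_travel: str
    occupation: str

# A hypothetical indicator: a combination of attribute values that,
# taken together, the system treats as a "risk" signal.
RISK_INDICATORS: list[dict] = [
    {"purpose_of_travel": "work", "occupation": "unemployed"},
    {"nationality": "COUNTRY_X", "first_entry_state": "MEMBER_STATE_Y"},
]

def screen(application: Application) -> bool:
    """Return True when the application matches any indicator combination."""
    fields = asdict(application)
    return any(
        all(fields.get(key) == value for key, value in indicator.items())
        for indicator in RISK_INDICATORS
    )
```

Each attribute is neutral on its own; it is the chosen combination that converts ordinary traveller data into a ground for suspicion – which is exactly why the selection of indicators deserves independent scrutiny.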
Why are fundamental rights in danger?
Profiling practices rely on the massive collection and processing of personal data, which represents a serious risk to the rights to privacy and data protection. Since most policing instruments pursue a public security interest, they are presumed legitimate. However, few actually meet transparency and accountability requirements, and they are thus difficult to audit. The essential legality tests of necessity and proportionality prescribed by the EU Charter of Fundamental Rights cannot be carried out: only a concrete danger – not the mere potentiality of one – can justify interference with the rights to respect for private life and data protection.
In particular, the criteria used to determine which profiles need further examination are opaque and difficult to evaluate. Which categories and which data are selected and evaluated, and by whom? Regarding the ETIAS system, the EU Fundamental Rights Agency stressed that it is unclear whether risk indicators can be used without discriminating against certain categories of people in transit, and therefore recommended postponing the use of profiling techniques. Generalising about entire groups of persons on such grounds must be checked against the right to non-discrimination. Further, it is troublesome that the evaluation and monitoring of profiling practices are entrusted to “advisory and guiding boards” hosted by law enforcement agencies such as Frontex. Excluding data protection supervisory authorities and democratic oversight bodies from this process is deeply problematic.
Turning neutral features or behaviours into signs of an undesirable or even mistrusted profile can have dramatic consequences for individuals’ lives. Matching a “suspicious profile” can lead to restrictions of your rights. In the area of counter-terrorism, for example, your rights to effective remedies and a fair trial can be hampered: since you are usually not aware that you have been placed under surveillance as a result of a match in the system, you find yourself unable to contest such a measure.
As law enforcement agencies across Europe increasingly engage in profiling practices, it is crucial that substantive safeguards are put in place to mitigate the many dangers these practices entail for individuals’ rights and freedoms.
Data-driven policing: the hardwiring of discriminatory policing practices across Europe (19.11.2019)
https://www.enar-eu.org/IMG/pdf/data-driven-profiling-web-final.pdf
New legal framework for predictive policing in Denmark (22.02.2017)
https://edri.org/new-legal-framework-for-predictive-policing-in-denmark/
Data Protection, Immigration Enforcement and Fundamental Rights: What the EU’s Regulations on Interoperability Mean for People with Irregular Status (14.11.2019)
https://www.statewatch.org/analyses/Data-Protection-Immigration-Enforcement-and-Fundamental-Rights-Full-Report-EN.pdf
Preventing unlawful profiling today and in the future: a guide (14.12.2018)
https://fra.europa.eu/sites/default/files/fra_uploads/fra-2018-preventing-unlawful-profiling-guide_en.pdf
(Contribution by Chloé Berthélémy, EDRi)