
Down with (discriminating) systems

Amidst a particularly hectic time for digital rights policy in Europe, there remains a large elephant in the room.

By EDRi · September 10, 2020

Europe is still reckoning with the repercussions of sustained global uprisings against racism following the murder of George Floyd. As the EU formulates its response in its upcoming ‘Action Plan on Racism’, EDRi outlines why it must address structural racism in technology as part of upcoming legislation.

Much of the discussion surrounding the ‘social’ side of technology focuses on the promise of a range of benefits that stem from digital innovation. The EU’s ongoing consultation on its AI legislation consistently reiterates the ‘wide array of economic and social benefits’ artificial intelligence can offer.

As our European societies increasingly wake up to the existence of structural racial inequality, we have to ask ourselves: who will really feel the benefits of innovation? What are the costs to human dignity and life that automated decision-making systems will bring? And what impact do they already have on existing systems of racism and discrimination?

AI presents huge potential for exacerbating racism in society, at a scale and with a degree of opacity unlike discrimination perpetrated by humans. Automated decision-making has often been portrayed as neutral and ‘objective’, when in fact these systems neatly embed and amplify the underlying structural biases of our societies.

For example, increasing evidence demonstrates how new technologies developed and deployed in the field of law enforcement differentiate, target and experiment on communities at the margins. Another example is the growing use of both place-based and person-based “predictive policing” technologies, which forecast where, and by whom, a narrow set of crimes is likely to be committed, and which repeatedly assign racialised communities a higher likelihood of presumed future criminality. Such systems include the Dutch Crime Anticipation System and the UK’s National Data Analytics Solution (‘NDAS’).

The various matrices (the Gangs Matrix, ProKid-12 SI, the NDAS) dedicated to monitoring and collecting data on future crime and ‘gangs’ target Black, Brown and Roma men and boys, revealing discriminatory patterns on the basis of race and class. Not only do such systems infringe on the presumption of innocence and the fundamental right to privacy, they codify the notion that if you are of a certain race, you are suspicious and need to be watched. Predictive policing systems also redirect police attention toward certain areas, increasing the likelihood of often fatal encounters with the police.

Racialised groups and undocumented people are already subject to over-surveillance. In Europe, undocumented migrants are generally unable to avail themselves of data protection rights, a vulnerability heightened by the development of mass-scale, interoperable repositories of biometric data to facilitate immigration enforcement.

How can we address structural racism perpetuated through technology and the digital space? With the Action Plan on Racism, the upcoming Digital Services Act and forthcoming legislation on artificial intelligence (AI), EDRi argues that the link between racism and technology cannot be ignored. To ensure that the ‘benefits’ of technology are felt equally across our societies, the EU must apply a racial justice lens to its future digital legislation and policy.

In our briefing ‘Structural racism, digital rights and technology’, EDRi calls on the EU to prevent abuses against racialised communities by legally restricting impermissible uses of artificial intelligence, such as predictive policing, biometric surveillance and uses of AI at the border.

There are numerous examples to draw from. In 2019, the city of San Francisco banned the use of facial recognition technology by police after racial justice activists highlighted the technology’s harmful impacts. This year, the UN Special Rapporteur on contemporary forms of racism recommended that Member States prohibit the use of technologies with a racially discriminatory impact. EDRi member Foxglove, working with the Joint Council for the Welfare of Immigrants, took legal action and forced the UK Home Office to end its use of a racially discriminatory visa algorithm. There is a growing international consensus that racism perpetuated through technology must be halted with radical measures. The EU must follow suit.

Read more:

Structural racism, digital rights and technology (EDRi briefing): https://edri.org/wp-content/uploads/2020/08/Structural-Racism-Digital-Rights-and-Technology_Final.pdf

AI recommendations paper: https://edri.org/wp-content/uploads/2020/06/AI_EDRiRecommendations.pdf

Ban Biometric Mass Surveillance paper: https://edri.org/wp-content/uploads/2020/05/Paper-Ban-Biometric-Mass-Surveillance.pdf

Foxglove: https://www.foxglove.org.uk/news/home-office-says-it-will-abandon-its-racist-visa-algorithm-nbsp-after-we-sued-them

PICUM and Statewatch (2019), “Data Protection, Immigration Enforcement and Fundamental Rights: What the EU’s Regulations on Interoperability Mean for People with Irregular Status”: https://www.statewatch.org/analyses/2019/data-protection-immigration-enforcement-and-fundamental-rights-what-the-eu-s-regulations-on-interoperability-mean-for-people-with-irregular-status/

(Contribution by Sarah Chander, Senior Policy Advisor, EDRi)