EU’s new artificial intelligence law risks enabling Orwellian surveillance states

By EDRi · May 5, 2021

Ever since European Commission President Ursula von der Leyen promised to deliver a regulation for “cutting edge yet trustworthy” artificial intelligence (AI) within her first 100 days in office, the microscope of global tech policy has been fixed on the EU’s response.

Yesterday, more than a year after von der Leyen took office, the European Commission launched its proposal on AI.

The regulation is a mixed bag. Despite persistent lobbying from EU countries and industry to avoid mandatory rules on AI, the proposal takes a much stronger fundamental rights stance than many expected. By prohibiting AI for social scoring and some police uses of biometric systems, the European Commission has paved the way for the argument that some uses of AI are simply too harmful to be allowed.

This admission is a glimmer of hope for civil society groups that have called for red lines on impermissible uses of AI, such as facial recognition in public spaces and predictive policing.

Big Brother?

But a deeper read reveals that the European Commission’s proposed prohibitions are relatively feeble. For example, the draft law carves out hefty exemptions to the prohibition on law enforcement agencies using “real-time remote biometric identification systems” (such as facial recognition) in public spaces.

On announcing the draft law, the European Commission’s Executive Vice-President Margrethe Vestager stated that “[t]here is no room for mass surveillance in our society.” Yet the list of exceptions to the already narrow prohibition, as well as the fact that the ban does not apply at all to companies or to other areas of government, leaves ample room for the use of biometric technologies to watch and monitor us. For an industry premised on creepy mass surveillance, the AI law may not look so strict after all.

What’s more, the European Commission’s proposal risks giving a green light to governments and public authorities to deploy discriminatory surveillance systems. Its rules for “high risk” AI – such as predictive policing, AI in asylum procedures and worker surveillance – fall mainly on the developers themselves, not on the public institutions actually deploying them, which is a cause for concern.

This also means that, with the exception of some narrow uses of biometric identification and categorisation, conformity with the law for most “high risk” uses is assessed by the developers themselves. This is a huge red flag for democratic oversight: can we rely on those who profit from AI systems to make a fair assessment?

Technologies of oppression

When analysing how AI systems might impact people of colour, migrants and other marginalised groups, context matters. Whilst AI developers may be able to predict and prevent some negative biases, for the most part, such systems will inevitably exacerbate injustice. This is because AI systems are deployed in a wider context of systematic discrimination and violence, particularly in the field of policing and migration.

It’s highly unlikely that the European Commission’s draft law on AI will put an end to the most harmful uses of AI. Most problematic uses – including AI that supposedly “detects lies” of migrants during the visa process, predictive policing systems which direct more policing to minority neighbourhoods and automated systems that purport to identify us by our gender identity or disability status – are likely to escape rigorous regulation if the rules stay as they are.

The clue is in the vested interest

There were a number of clues that the EU would not be willing to halt discriminatory surveillance technologies. Back in February 2020, the European Commission named “promoting the adoption of AI by the public sector” as a key objective, arguing that the rapid deployment of products and services that rely on AI was essential.

In the areas of law enforcement and migration control in particular, there has been a clear “pro-AI” agenda. In the EU’s migration pact, the Commission proposed a feasibility study on the use of facial recognition, including on minors, as part of its “fresh start on migration”.

In other cases, the EU has directly funded experiments with discriminatory surveillance technologies. Over 4 million euros was spent between 2016 and 2019 on iBorderCtrl, an “intelligent” border processing system that subjected travellers to “non-invasive” lie detection using facial recognition. Elsewhere, the EU has funded research into “race classification” systems as part of its Horizon 2020 funding programme.

If the EU has already made up its mind about these systems, how can we expect the new law to limit their negative impact, particularly on marginalised groups? Without legal prohibitions, or “red lines”, on state and commercial uses of AI that watch and discriminate against us, the AI law is unlikely to protect us fully.

For those most likely to feel the impact of discriminatory surveillance, this is simply not enough. Far from a “human-centred” approach, the draft law in its current form runs the risk of enabling Orwellian surveillance states.

This op-ed was first published by Euronews.

Image credit: Lorenzo Miola / Fine Arts

Contribution by:

Sarah Chander

Senior Policy Advisor

Twitter: @sarahchander