EU’s AI law needs major changes to prevent discrimination and mass surveillance

The European Commission has just launched its proposed regulation on artificial intelligence (AI). As governments and companies continue to use AI in ways that lead to discrimination and surveillance, the proposed law must go much further to protect people and their rights. Here’s a deeper analysis from the EDRi network, including some initial recommendations for change.

The proposal to regulate AI is a globally significant step. Uses of AI systems have the ability to enable mass surveillance and intrusion into our personal lives, reinforce some of the deepest societal inequalities, fundamentally alter the delivery of public and essential services, shift more power into corporate hands and disrupt the democratic process.

Whilst the European Commission’s proposal, also known as the Artificial Intelligence Act (AIA), opens the door to some potentially effective ways to curtail some of the most harmful impacts of AI on people and society, it simply does not go far enough to fully protect people from these harms. As it stands, the proposal centres on facilitating the exponential development and deployment of AI in the EU, making only minor concessions to fundamental rights.

Important recognition of “unacceptable AI”

The proposal takes a notable step in acknowledging that some uses of AI are simply unacceptable and must be prohibited. However, many of the prohibitions are too narrowly defined, introducing unclear and unjustifiably high thresholds for systems to be covered, such as a requirement to cause “physical or psychological harm”. In practice, this could make it very hard to prove that harmful uses of AI have violated these prohibitions.

“We are pleased to see the European Commission including prohibitions; however, the ethical bar is extremely low. Exploiting the vulnerabilities of persons due to age or physical or mental disability should be prohibited regardless of whether actual physical or psychological harm is caused.”

– Nadia Benaissa, Bits of Freedom

In addition, the draft law does not prohibit the full extent of unacceptable uses of AI highlighted by EDRi and a diverse coalition of human rights organisations. In particular, predictive policing, uses of AI at the border, automated recognition of sensitive identity traits (like race, gender identity, disability), uses of AI to determine access to essential public services, and many uses of biometric systems in ways that will lead to mass surveillance remain permissible, albeit subject to safeguards.

This leaves a worrying gap for many discriminatory and surveillance practices used by governments and companies, often with extremely harmful consequences for people.

Moving forward, legislators must focus on preventing harms and clarifying the legal limits on unacceptable uses of AI. Article 5 of the AIA should therefore be expanded to include the full scope of unacceptable practices uncovered by civil society. Legislators should strive for wide consultation with civil society and affected communities to set these red lines.

Wide gaps for biometric mass surveillance systems

The proposal takes a small step towards banning practices that amount to biometric mass surveillance – as called for by EDRi, the Reclaim Your Face campaign, and a wide range of civil society groups. Under article 5(1)(d) of the current proposal, some specific uses of biometric technologies are prohibited when deployed by law enforcement (such as police using live facial recognition cameras against people protesting).

However, the approach to biometric identification in other cases reveals a failure to stop applications of AI which use our faces, bodies and behaviours against us. Firstly, the law enforcement ban is subject to wide exceptions which will be ripe for abuse – little better than the alarming situation we see today. Secondly, the ban does not apply to other authorities (e.g. schools, local governments) or private companies (e.g. supermarkets, transport companies), despite evidence that these actors already undertake biometric mass surveillance. Thirdly, the ban only applies to “real-time” uses, which leaves a big gap for equally harmful use cases like police monitoring people through controversial Clearview AI software.

The proposal also fails to protect people from other harmful biometric methods (such as categorisation), despite the significant threats posed to people’s dignity and right to non-discrimination. Such practices can perpetuate harmful stereotypes by putting people into boxes of “male” and “female” based on a biometric analysis of their facial features, or making guesses about people’s future behaviour based on predictions about their race or ethnicity.

“The proposal’s treatment of ‘biometric categorisation,’ defined as assigning people to categories based on their biometric data, is very problematic. It lumps together categories such as hair or eye colour, which can indeed be inferred from biometrics, with others like sexual or political orientation, that absolutely cannot. Inferring someone’s political orientation from the shape of their facial features is a revival of 19th century physiognomy, a dangerous, discredited and racist pseudoscience. Applications like this should be banned outright, but are currently only assigned minimal transparency obligations.”

– Daniel Leufer, Access Now

Whilst the proposal includes rules to ensure that remote biometric identification (RBI) systems cannot be self-certified by their developers, and the exceptions to the ban on law enforcement uses do not apply unless each country specifically puts them into its national law, the AIA nevertheless falls seriously short of safeguarding people’s faces and public spaces. Shortly after the proposal launched, the European Commission’s own supervisory authority, the EDPS, strongly criticised the failure to sufficiently address the risks posed by RBI, urging that: “A stricter approach is necessary given that remote biometric identification […] presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives.”

Legislators must reject the idea that some forms of remote biometric identification are permissible whilst others are impermissible; and instead implement a full prohibition on all forms of biometric mass surveillance practices in publicly accessible spaces by all public authorities and private actors.

A compliance bureaucracy focused on AI developers

The regulation allows a very wide scope for self-regulation by companies developing “high risk” AI. For the majority of high-risk AI uses contained in Annex III, the rules in article 43(2) mean that compliance with the regulation’s requirements is primarily ensured through self-assessment by the providers themselves. Very worryingly, it will be for AI providers themselves (those with a financial interest in securing compliance and without the expertise to assess the implications for people’s rights) to judge whether they have sufficiently met the requirements set out on data governance, transparency, accuracy, and more.

“The Commission’s framework is likely to become a superficial ‘box ticking’ exercise, rather than a way to ensure real accountability of AI systems that pose a high risk for fundamental rights. Instead of relying on purely technical self-assessments, we need effective, independent oversight, which takes into account human rights impact and involves people affected by AI and civil society.”

– Karolina Iwańska, Panoptykon Foundation

Legislators must ensure that high risk AI systems are truly treated as such. This means incorporating human rights impact assessments as a key feature in the regulatory framework, and ensuring that compliance with all obligations is undertaken by an independent third party before AI systems are put into use.

Enabling discriminatory uses of AI

Whilst the proposal recognises how AI systems perpetuate historical patterns of discrimination and purports to ‘minimise the risk of algorithmic discrimination’, the response severely underestimates the causes and extent of discrimination through AI.

The proposal focuses on data quality obligations on providers of “high risk” AI systems. However, for many of the applications listed in annex III, whilst AI developers may be able to predict and prevent some negative biases, for the most part, such systems will inevitably exacerbate structural inequalities. This is because AI systems are deployed in a wider context of structural discrimination. By relying on technical checks for bias as a response to discrimination, the proposal risks reinforcing a harmful suggestion that removing bias from such systems is even possible.

“When it comes to the use of AI risk assessment tools in the law enforcement context, such systems base their risk scores on an immense amount of personal data. Such a vast collection of personal data for the purpose of assessing the risk of re-offending will amount to a serious interference with the rights to privacy and data protection of the individuals concerned. Finally, there is a very serious risk in presenting these tools as ‘race’ neutral, independent of bias and capable of adequately predicting risks of re-offending behaviour.”

– Eleftherios Chelioudakis, Homo Digitalis

In the field of security, whilst the proposal recognises that AI used in law enforcement and migration control operates in a context of significant power imbalance and vulnerability, the proposal itself does not adequately mitigate the vast scale of potential harm likely to be inflicted on members of marginalised groups. Most uses of AI in these fields, such as the use of AI systems to predict the occurrence of crime or assess people for “risk” in the immigration context, are not prohibited. Rather, they are characterised as high risk, subject to self-assessed conformity checks as well as supervision by the law enforcement and immigration bodies at national level. This response is severely inadequate for uses of AI in migration control and law enforcement that by nature exacerbate structural discrimination and inequality.

“By allowing border tech experiments to continue, the EU’s AI proposal shows a profound lack of intersectional engagement with the historical ways that technologies perpetuate systemic racism and discrimination, particularly against people on the move and communities crossing borders.”

– Petra Molnar, former Mozilla Fellow, EDRi

In addition to expanding the scope of Article 5 AIA to include inherently discriminatory uses of AI as demonstrated by civil society, legislators must ensure that the regulation is not reduced to ‘de-biasing’ approaches geared toward improving the accuracy of inherently surveillant and discriminatory practices.

Stopping short of meaningful transparency

Whilst the regulatory proposal outlines some welcome obligations toward people subject to or interacting with AI systems such as emotion recognition, biometric categorisation (the use of AI to ‘identify’ sensitive identity traits like race, gender identity, disability) and deepfakes (Article 52), for the majority of high risk AI systems this transparency is limited in scope.

The regulation (article 13) imposes transparency obligations on providers toward users, but not directly toward people affected by or subject to AI systems. As such, the proposal will have a limited effect on people’s ability to understand and challenge harmful and opaque AI systems deployed against them.

“Making AI transparent is key to ensuring it can be trusted by everyone subject to it. Information about data used by the AI, implementation details and how it is evaluated must be provided. Moreover, predictions must also be made interpretable so that one can understand what led to them. Free Software facilitates this because if AI is released under a Free Software license, everyone is able to inspect it, understand how it is made, learn more about the data that went through it and make it interpretable.”

– Vincent Lequertier, Free Software Foundation Europe

In addition, whilst the inclusion of an EU database of high risk AI systems as outlined in article 60 is welcome, currently the provision focuses on the registration of high risk applications being put on the EU market. Full public transparency necessitates that this database register high risk systems being put into use, including details on which actors are deploying them and for which purpose.

Transparency is a means to the end of full accountability of AI systems, in particular to those subject to their decisions and predictions. When amending the AI regulation, legislators must take steps to ensure full and meaningful transparency of all high risk systems in use.

A regulation for companies, not people

The Commission’s proposal mostly governs what AI systems can be put on the market, focusing on the relationship between those developing and those deploying AI (which the proposal calls “users”). The majority of obligations fall on the provider (the developer of the AI system). The proposal outlines minimal direct obligations between the user toward those subject to, or affected by, AI systems.

In particular, the proposal does not outline any mechanisms by which those harmed by AI systems may seek recourse and redress from the user of AI systems (with the exception of article 52). This means that not only is the power imbalance between governments and people widened, but furthermore that AI providers – likely to be private companies – gain an undemocratic degree of power over public authorities to set the rules for how those authorities must use AI systems.

Despite some strong fundamental rights language, the Commission’s draft law is primarily geared toward facilitating and promoting AI uptake. As demonstrated by the provider-focused compliance mechanisms as well as the regulatory sandboxing provisions (see articles 53-55), the proposal leaves deliberately wide leeway for ever-greater uptake of AI in all areas of public life, resting on unjustified assumptions that this benefits people in all cases. Yet, as civil society have repeatedly demonstrated, a generalised push for AI uptake is incompatible with a fundamental rights based approach.

Fundamental changes are needed for a truly people-centred AI law. Legislators must ensure the inclusion of direct obligations of users to people harmed by AI systems, including mechanisms for collective redress similar to those found in the General Data Protection Regulation (GDPR). In addition, future processes to determine prohibitions and “high risk” applications must be democratic and inclusive, with a dedicated method of engagement for civil society and affected communities.

(Image credit: Lorenzo Miola/ Fine Arts)

Sarah Chander

Senior Policy Advisor

Ella Jakubowska

Policy and Campaigns Officer