The EU should regulate AI on the basis of rights, not risks
EDRi's member Access Now explains why the upcoming legislative proposal on AI should be a rights-based law, like the GDPR. The European Commission must not compromise our rights by replacing their protection with a mere risk mitigation exercise carried out by the very actors with a vested interest in rolling out this technology.
This article was first published on Access Now's website.
Contribution by: Fanny Hidvegi (Access Now’s Europe Policy Manager), Daniel Leufer (Access Now’s Europe Policy Analyst) and Estelle Massé (Access Now’s Senior Policy Analyst and Global Data Protection Lead)
In only a few months the European Commission is set to present a regulation on artificial intelligence (AI). Despite numerous statements from civil society and other actors about the dangers of applying a risk-based approach to regulating AI, the European Commission seems determined to go in this direction: it already announced in its 2020 White Paper on Artificial Intelligence that it intended to apply a risk-based approach.
A risk-based approach involves determining the scale or scope of risks related to a concrete situation and a recognised threat. This approach is useful in technical environments where companies have to evaluate their own operational risks. However, the EU's approach would have companies weigh their operational risks against people's fundamental rights. This is a fundamental misconception of what human rights are: they cannot be balanced against companies' commercial interests. Companies would also have an interest in downplaying the risks in order to bring products to market. A risk-based approach to regulation is therefore not adequate to protect human rights. Our rights are non-negotiable, and they must be respected regardless of the level of risk associated with external factors.
We have heard numerous suggestions that a risk-based approach would take its lead from the General Data Protection Regulation (GDPR). This is worrying precisely because the GDPR is not fundamentally a risk-based law, and the parts of it that do deal with risk assessment have proven highly problematic.
As explained below, the upcoming legislative proposal on AI should be a rights-based law, like the GDPR, as this is the only way to ensure the protection of fundamental rights. The European Commission must not compromise our rights by replacing their protection with a mere risk mitigation exercise carried out by the very actors with a vested interest in rolling out this technology.
Why the GDPR is not a risk-based law
Throughout the negotiations of the GDPR and since its adoption in 2016, many in industry have claimed that the law “enshrines” or “establishes” a risk-based approach to data protection. That is not the case.
The GDPR does include references to risks and defines the requirements for conducting a risk assessment in specific scenarios, such as when there is a data breach. Fundamentally, however, the GDPR is about rights and making them operational. It sets out a series of rights to enable us to control our information, and then places obligations and requirements on companies and other entities that use our data in order to protect those rights. In fact, during the negotiations of the GDPR, the Article 29 Working Party — which gathered all EU data protection authorities — published a statement on the risk-based approach to explain that it cannot replace companies' obligations to protect our rights:
“…the Working Party is concerned that both in relation to discussions on the new EU legal framework for data protection and more widely, the risk-based approach is being increasingly and wrongly presented as an alternative to well-established data protection rights and principles, rather than as a scalable and proportionate approach to compliance. The purpose of this statement is to set the record straight.”
The statement further clarifies that “rights granted to the data subject by EU law should be respected regardless of the level of the risks which the latter incur through the data processing involved”. The data protection authorities have not since changed their opinion on the matter. The guidelines the European Data Protection Board recently issued confirm that the risk-based approach under the GDPR is limited to a few articles and clarify that other obligations continue to apply.
Why a risk-based approach won’t work for regulating AI
The development of AI and automated decision-making (ADM) systems poses great risks to fundamental rights, and there is mounting evidence of how these systems can violate them. We are seeing the use of live facial recognition systems in public spaces that amounts to mass surveillance; the deployment of publicly funded pseudoscientific AI “lie detector” systems along national borders; the use of biased, problematic ADM systems to detect welfare fraud; and even the use of relatively basic ADM systems that entrench and amplify inequality in grading students.
The fact that AI systems can operate in unpredictable ways, and that systems that ostensibly perform “simple” or routine tasks can end up having unforeseen and often highly damaging consequences, deepens these risks. A good example is automated gender recognition technology, in which machine learning systems are trained to “detect” someone’s gender from facial or other characteristics. While such systems are widespread and usually considered trivial, they have been shown to systematically discriminate against trans and gender non-conforming people, either by misgendering them or by forcibly assigning them a gender identity that does not match their own.
From a fundamental rights perspective, these two considerations — the evidence that there is a threat to our rights, and the fact that AI and ADM systems can act in unpredictable ways — should lead to a proposal for a proper human rights impact assessment by independent experts and regulators, both ex ante and at regular intervals once the systems are in use. Unfortunately, that is not what the European Commission proposed in its communication on the upcoming AI regulation.
Rather than focusing on fundamental rights, the Commission’s stated approach to regulating AI has been to place innovation and increased AI uptake as its primary concern. Protecting rights is acknowledged as a secondary concern, with the worrying proviso that it must only be done in a manner that does not risk stifling innovation.
This insistence on protecting innovation and increasing AI uptake at all costs seems to be one of the main motivations behind the proposal to adopt a risk-based approach to regulation. The Commission’s White Paper on Artificial Intelligence lays the groundwork for this approach by proposing to distinguish between high-risk and low-risk AI systems. In addition, a study commissioned by the European Parliament’s Panel for the Future of Science and Technology (STOA), The impact of the General Data Protection Regulation (GDPR) on artificial intelligence, envisions a “risk-based approach, [which] rather than granting individual entitlements, focuses on creating a sustainable ecology of information, where harm is prevented by appropriate organisational and technological measures”.
How would such an approach work? The Commission’s White Paper proposes using two cumulative criteria to assess risk, “considering whether both the sector and the intended use involve significant risks, in particular from the viewpoint of protection of safety, consumer rights and fundamental rights” (p.17). This suggests that to trigger regulatory protections, an AI system would need to be used in a sector identified as high risk (such as health) and “used in such a manner that significant risks are likely to arise”. The White Paper leaves open the possibility that “the use of AI applications for certain purposes is to be considered as high-risk as such” and cites the use of AI in hiring and in remote biometric identification systems.
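To see concretely how narrow this cumulative test is, consider the following minimal sketch of the gate it describes. The sector and purpose lists, function name, and inputs below are hypothetical illustrations, not anything the Commission has specified:

```python
# Illustrative sketch only: a toy rendering of the White Paper's two cumulative
# criteria, not a real or proposed legal test. All names and lists here are
# hypothetical stand-ins.

# Example sectors of the kind the White Paper flags as potentially high-risk;
# the actual list would be fixed in the regulation itself.
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy"}

# Uses the White Paper suggests may count as high-risk "as such", regardless of sector.
HIGH_RISK_PURPOSES = {"hiring", "remote biometric identification"}

def triggers_safeguards(sector: str, risky_use: bool, purpose: str = "") -> bool:
    """Return True only if the system would fall under regulatory protections."""
    # Exception: certain purposes count as high-risk no matter the sector.
    if purpose in HIGH_RISK_PURPOSES:
        return True
    # Cumulative test: BOTH the sector AND the manner of use must be high-risk.
    return sector in HIGH_RISK_SECTORS and risky_use

# A demonstrably harmful system in an unlisted sector escapes all safeguards:
print(triggers_safeguards("education", risky_use=True))  # prints False
```

Because both conditions must hold, a demonstrably harmful system in an unlisted sector, such as the exam-grading example above, would trigger no safeguards at all.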
Why a rights-based approach is the way to go
In its response to the consultation on the Commission’s White Paper, Access Now outlined the many problems with the risk-based approach it proposes. One clear issue is that dangerous applications wrongly classified as “low risk” will fall through the cracks and escape proper oversight and safeguards. Access Now therefore proposed that the burden of proof be on the entity wanting to develop or deploy the AI system to demonstrate that it does not violate human rights, via a mandatory human rights impact assessment (HRIA). This requirement would cover all applications in all domains, and it should apply to both the public and private sectors, as part of a broader due diligence framework.
Once such an impact assessment has been conducted, there could then be a role for some form of risk assessment to determine the consequences that follow from its findings. Regardless, Access Now cautions against taking a simplistic high/low-risk binary approach, and further encourages the Commission to develop clear and coherent criteria to determine when an AI or ADM system has a significant effect on an individual, a specific group, or society at large.
Most importantly, however, the Commission needs to account for the fact that even a “high-risk” classification will be inadequate to capture the danger posed by some uses of AI. With applications such as remote biometric identification, automated gender “detection”, and other forms of behavioural prediction, it is not a question of whether these uses of AI pose a “risk” to human rights: they are fundamentally incompatible with the enjoyment and protection of human rights. Framing the problem in terms of risk encourages the idea that we could introduce safeguards or ethical guidelines to “lower the risk”. In reality, some systems inherently undermine our rights and our dignity in a manner that cannot be mitigated.
It’s time for a rights-based approach with a ban on dangerous uses of AI
Together with 61 other civil society organisations in Europe, we have called for the Commission to ensure that its regulatory proposal on AI makes provision for “red lines” or prohibitions on certain applications, such as those that would deepen structural discrimination, exclusion, and collective harms, or those that serve to restrict or grant discriminatory access to vital services such as health care and social security. Perhaps most urgently, our coalition is calling for a ban on biometric mass surveillance.
We believe that the European Union can only live up to its promise of promoting “trustworthy AI” if it takes a rights-based approach and bans outright those applications of AI that are inherently incompatible with fundamental rights.