Attention EU regulators: we need more than AI “ethics” to keep us safe
In this post, Access Now and European Digital Rights (EDRi) analyse recent developments in the EU AI debate and explain why we need a bold, bright-line approach that prioritises our fundamental rights.
The last few months have been a tumultuous ride for those of us concerned about ensuring our fundamental rights are protected as the EU develops its artificial intelligence (AI) policy. While EU Commission Vice Presidents Vestager and Jourová recently committed to “proactively” combating the threat of AI systems reinforcing structural discrimination, other developments have been less promising. A number of Member States have taken a strong anti-regulation position in the debate, and yesterday the European Parliament voted on several AI-related reports, including one on AI and ethics, that leave a lot to be desired. With the Commission gearing up to propose new legislation early next year, there is a lot at stake.
Here we take a closer look at these developments, and explain what EU regulators must do to safeguard our rights.
Member States tell the EU not to burden companies with AI regulations
In a position paper led by Denmark, 14 Member States (including France and Poland) have called on the EU to “avoid setting burdensome barriers and requirements which can be a hindrance for innovation”. They caution against over-regulation and plead for an approach that puts innovation front and centre.
One of the key worries they cite is that the European Commission’s proposed risk-based approach to regulating AI will end up classifying too many AI systems as high-risk. They argue for an “objective methodology” in assessing the risk of such systems, and suggest the risk-classification “should make the category of high-risk AI the exception rather than the rule”.
While we agree that an objective methodology is essential, the ultimate goal of such a methodology should not be to limit the number of AI systems classified as high-risk. The reason we need to identify risks at all is to better protect our rights — not to make things easier for companies at any cost.
In our submissions to the consultation on the Commission’s White Paper on AI, both EDRi and Access Now highlighted weaknesses in the Commission’s risk-based approach. We pointed out that the burden of proof to demonstrate that an AI system does not violate human rights should be on the entity that develops or deploys the system, and that such proof should be established through a mandatory human rights impact assessment. That goes for all applications of AI, in every domain, and it must apply both to the public and private sectors, as part of a broader due diligence framework. Moreover, if a human rights impact or risk assessment is carried out, it must be made publicly accessible and open to challenge. Civil society organisations and the people impacted by AI systems must have the capacity to contest an assessment that is not correct.
The 14 Member States also claim that with appropriate measures in place, “European businesses would be able to distinguish themselves from the global competitors as the trustworthy alternative in times of digital scandals and increasing data collection”. The idea that these businesses will achieve trustworthiness without the EU ensuring some level of “barriers and requirements” for those developing or deploying AI systems is highly questionable. In the current regulatory environment, individuals or groups who are facing a threat or violation of their rights have to show how they are being harmed, which can be extremely difficult and burdensome. If you decide not to attend a protest because your city is using live AI-powered facial recognition technology, how would you quantify your loss of freedom? We can better prevent abuse of our rights — and the “digital scandals” that follow — by enacting strong regulations that protect them.
While the signatories of the position acknowledge the risks posed by AI systems, they consistently put forward the wrong solutions: soft-law instruments premised on the baseless conviction that unfettered innovation will always lead to societal benefits. In 2020, we should finally put to rest the idea that innovation is an unequivocal good in itself and that we can rely on tech companies to self-regulate. Some kinds of innovation threaten and undermine fundamental rights while promising huge profits to the companies behind them. Only by enforcing legal obligations on those developing or deploying AI systems will we succeed in protecting people from AI-driven mass surveillance, predictive policing, and other harmful “innovations”.
The European Parliament’s framework for ethical AI fails to draw a single red line
Just when we thought the AI regulatory debate had finally shifted from ethics to fundamental rights, the European Parliament has made a disappointing move, adopting a Report on “A Framework of Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies”. The authors of this Report mistakenly assume that AI ethics principles will be sufficient to prevent harm and mitigate risks. They are cautious and restrained on fundamental rights, taking only tentative steps to outline the biggest threats that artificial intelligence poses to people and society, while failing to propose a legislative framework that would address these threats or provide any substantive protections for people’s rights. They draw absolutely no red lines, even for the uses of AI that civil society around the world recognises as the most harmful.
There are some broad principles in the Report that should be included in the upcoming regulation, such as the principle that AI should respect human agency and democratic oversight. The Report also offers recommendations related to transparency, such as a proposed requirement that developers and deployers of high-risk technologies provide documentation to public authorities on their use and design and, “when strictly necessary”, “source code, development tools and data used by the system”. However, these measures fall short of the transparency requirements that civil society has called for, including the joint recommendation of Access Now and AlgorithmWatch to establish public registers for AI systems used in the public sector. Moreover, any transparency requirements that focus only on allegedly “high-risk” systems leave a gap in oversight of systems that do not fall into this category but still threaten our rights.
There is a slight nod to the structural power differentials that many automated decision-making and data-driven systems enable, as the Report underscores the need for “personal data [to be] protected adequately, especially data on, or stemming from, vulnerable groups, such as people with disabilities, patients, children, the elderly, minorities, migrants and other groups at risk of exclusion”. Yet despite that nod, the Report vastly underestimates the need for strong regulations to prevent AI systems from reinforcing and amplifying structural discrimination. It suggests we can effectively counter such discrimination by “encouraging the de-biasing of datasets”, implying there is a technical “fix” when what we need to protect marginalised groups are strict regulatory requirements.
Finally, in addressing the use of AI for biometric identification, such as through facial recognition — a use that EDRi and Access Now have shown to enable unlawful mass surveillance — the Report fails spectacularly to follow its own logic. The Report emphasises the need for proportionality and substantial public interest to justify any biometric processing. Yet it does not conclude that some uses of AI, such as use for indiscriminate or arbitrary surveillance of people’s sensitive biometric data in public spaces, are fundamentally and inherently disproportionate.
It is of small comfort that the Report does not rule out banning some uses of AI and foresees “mandatory measures to prevent practices that would undoubtedly undermine fundamental rights” — while still cautioning against over-regulation.
The European Parliament’s laissez-faire approach to preventing the use of AI to infringe on our rights to express ourselves, to protest, and to be free from discrimination comes at a time when current rules to prevent violation of these rights are not being systematically enforced. Effective enforcement of the EU’s data protection rules is key to putting a stop to the deployment of systems for biometric mass surveillance in Member States across Europe. We should be addressing that problem, not adding to it.
AI’s broken promises: instead of seeking evidence of harm, demand evidence of benefits
Those arguing that we should safeguard innovation at any cost often present an exhaustive list of promises and benefits that AI could bring to society. In the Parliament Report, for example, we see claims that AI can guarantee the security of individuals in national emergencies, ensure that people with disabilities can access public services, help reduce our carbon footprint, and even contribute to reducing social inequalities.
Big claims. Are we to accept them uncritically, at face value? Should they form the basis for a regulation with such significant implications for the rights of so many people, including those in targeted or marginalised groups – such as people in migrant communities, LGBTQ people, people with disabilities, or people living in poverty – who will experience the deepest negative impact of AI-enabled rights violations?
Earlier this year, EDRi members argued that “nobody’s rights should be sacrificed at the altar of innovation.” Instead, perhaps we should reverse the burden of proof. Computer science researcher Abeba Birhane tweeted last week, “the default thinking… needs to be all tech is harmful/ has negative consequences until it is critically examined and proven otherwise.” As it stands, we ask the people whose rights are threatened to show how they are being harmed, yet the companies making the technology can easily hide behind claims of protecting “innovation” when we seek the transparency to understand how these AI systems actually work. This has to change.
How the EU can protect our rights
The upcoming regulation on AI should reflect appropriate scepticism of AI’s so-called benefits, and put in place a framework that prevents uses of AI from violating our rights and harming our societies, with the protection of our rights taking priority over other considerations. To do this, regulators should:
• develop a legal framework which effectively prohibits systems that by their very nature will be used to infringe fundamental and collective rights;
• incorporate mandatory, publicly accessible, and contestable human rights impact assessments for all uses of AI to determine the appropriate safeguards, including the potential for prohibiting uses that infringe on fundamental rights; and
• complement these efforts with stronger enforcement of existing data protection and other fundamental rights laws.
Everyone participating in the debate over regulating AI technologies acknowledges the risks they pose. We already understand that uses of AI can facilitate discrimination and enable and encourage mass surveillance of our most sensitive personal and biometric data, all with an opacity that frustrates democratic oversight. What is still lacking is acknowledgment that those risks cannot be mitigated using only soft-law approaches, voluntary certification schemes, and self-regulation by companies. In a political climate that centres industry’s demands for these approaches, we need a bold political response that prioritises fundamental rights by laying down red lines and strict obligations. If the EU truly values our rights, there can be no other option.