If AI is the problem, is debiasing the solution?

The development and deployment of artificial intelligence (AI) in all areas of public life have raised many concerns about its harmful consequences for society, in particular its impact on marginalised communities. EDRi's latest report "Beyond Debiasing: Regulating AI and its Inequalities", authored by Agathe Balayn and Dr Seda Gürses,* argues that policymakers must tackle the root causes of the power imbalances created by the pervasive use of AI systems. By promoting technical ‘debiasing’ as the main solution to AI-driven structural inequality, we risk vastly underestimating the scale of the social, economic and political harm AI systems can inflict.

By EDRi · September 21, 2021

AI-driven systems have broad social and economic impacts, can lead to violations of human rights and exacerbate structural discrimination and inequalities. For the most part, regulators have responded to these concerns by narrowly focusing on the techno-centric solution of debiasing algorithms and datasets. By doing so, they risk creating a bigger problem for both AI governance and democracy because this approach squeezes complex socio-technical problems into the area of technical design – and thus into the hands of technology companies. By largely ignoring the costly production environments that machine learning requires, regulators encourage an expansionist model of computational infrastructures driven by Big Tech.

“Debiasing locates the problems and solutions in algorithmic inputs and outputs, shifting political problems into the domain of design, dominated by commercial actors.”

– Agathe Balayn and Dr Seda Gürses

This new report commissioned by EDRi, "Beyond Debiasing: Regulating AI and its Inequalities", outlines the limits of technical debiasing measures as a solution to the structural discrimination and inequality produced by AI systems. It shows the vast impact of AI-based systems on the governance, operations and financial stability of public sector organisations, further embedding the dominance of technology companies. Integrating everyday operations into current computational infrastructures could significantly transform, if not damage, the ability of public institutions to provide individuals with the conditions necessary to exercise their fundamental rights.

EU policy approaches to AI, discrimination and structural inequalities

The research finds that key European policy initiatives lack genuine engagement with existing research, activism and technical understanding around AI and structural discrimination. The result is uncertainty about the scope of the problem to be addressed and a focus on inappropriate techno-centric solutions.

EU laws and policies put forward ‘debiasing’ as the primary means of addressing discrimination in AI, but fail to grasp the basics of debiasing approaches. Policy documents mistakenly suggest that mitigating biases in datasets guarantees that future systems based on these so-called ‘debiased datasets’ will be fair and respect the right to equality and non-discrimination.
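
The report's point here can be illustrated with a small, self-contained sketch (the data and variable names below are hypothetical, chosen purely for illustration and not taken from the report): even when a training dataset is ‘debiased’ by equalising how often each group appears in it, a model that picks up on a proxy feature correlated with group membership, such as a postcode, can still produce sharply unequal outcomes.

```python
# Hypothetical illustration: equalising group representation in a dataset
# does not guarantee equal outcomes, because proxy features carry the bias through.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: a protected attribute and a correlated proxy feature.
group = rng.integers(0, 2, n)                               # 0 = group A, 1 = group B
postcode = np.where(rng.random(n) < 0.8, group, 1 - group)  # proxy, 80% aligned with group

# Historical labels encode past discrimination against group B.
label = (rng.random(n) < np.where(group == 0, 0.6, 0.3)).astype(int)

# 'Debiasing' step: resample so that both groups are equally represented.
idx_a, idx_b = np.flatnonzero(group == 0), np.flatnonzero(group == 1)
k = min(len(idx_a), len(idx_b))
balanced = np.concatenate([rng.choice(idx_a, k, replace=False),
                           rng.choice(idx_b, k, replace=False)])

# A simple rule learned from the 'debiased' data (predict positive where the
# postcode's historical positive rate exceeds 50%) still keys on the proxy.
rate_by_postcode = np.array([label[balanced][postcode[balanced] == p].mean()
                             for p in (0, 1)])
prediction = (rate_by_postcode[postcode] > 0.5).astype(int)

for g, name in ((0, "group A"), (1, "group B")):
    print(f"positive-prediction rate for {name}: {prediction[group == g].mean():.2f}")
# Despite equal representation in the training data, roughly 80% of group A
# but only about 20% of group B receive a positive prediction.
```

In this toy setting the postcode, not the rebalanced group column, drives the prediction, which is why debiasing a dataset alone cannot guarantee non-discriminatory outcomes.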

The report concludes that policymakers provide insufficient guidance on debiasing requirements and on how to address their techno-centric limitations. They also treat AI systems as a packaged product, pushing the complexities of AI production pipelines, and the continuously evolving services they deliver, outside the scope of regulation. In sum, it is difficult to assess either the validity of the current policy focus on debiasing datasets or the future effectiveness of its application in regulating AI.

Promoting debiasing as a silver bullet

Even if policymakers develop a better grasp of the technical methods of debiasing data or algorithms, debiasing approaches will not effectively address the discriminatory impact of AI systems. By design, debiasing approaches concentrate power in the hands of service providers, giving them (and not lawmakers) the discretion to decide what counts as discrimination, when it occurs and how to address it.

Debiasing approaches divert important political questions into the realm of the technical. For example, recent trends in machine learning applications, such as the revival of eugenics, phrenology and physiognomy and the use of reductionist proxies to represent categories like gender, race or sexuality, reflect implicit and socially unacceptable assumptions, and must be prohibited.

When regulators rely upon debiasing as a solution to AI discrimination and inequalities, they distract attention from the broader reordering of society brought about by AI-based systems. Given the limitations of debiasing techniques, policymakers should stop advocating debiasing as the sole response to discriminatory AI and instead promote it only for the narrow applications to which it is suited.

Beyond Debiasing: What’s missing from policy debates on AI and structural inequality?

The report unpacks the problematic assumptions about AI and offers an assessment of the limits of a focus on debiasing. The report puts forward alternative viewpoints that go beyond current techno-centric debates on data, algorithms and automated decision-making systems (ADMs). These frameworks outline different ways of analysing AI systems’ societal impact, yet are currently missing in policy debates on ‘bias’:

- Aspects inherent to the fundamental principles of machine learning (such as the repetition of past data patterns, targeted inferences and an inherent tendency to increase scale) are likely to cause harms that are rarely considered in debiasing debates.

- The focus on AI as a fixed ‘product’ obscures the complex processes by which AI systems are integrated into broader environments, processes that can create significant harms (such as labour exploitation and environmental extraction) which policymakers often overlook.

- The production and deployment of machine learning depend heavily on existing computational infrastructures in the hands of a few companies. Ownership of these computational resources is likely to lead to a greater concentration of the technical, financial and political power of technology companies, exacerbating global concerns around political, economic and social inequalities.

- AI-based systems offer organisations the possibility of automating and centralising workflows and of optimising institutional management and operations. These transformations are likely to create dependencies on third parties and computational infrastructures, with demonstrable consequences for the structure of the public sector and democracy more generally.

Next steps

This report is a necessary contribution to the current policy debates, showing that some AI uses are simply too problematic to be ‘fixed’ with technical solutions. In April 2021, the European Commission proposed the Artificial Intelligence Act (AI Act), which echoed some of EDRi’s concerns on the harmful uses of AI but still offered vague prohibitions that cannot fully uphold people’s fundamental rights. Over the next few years, the European Parliament and the Council of the European Union will be negotiating the contents of the AI Act proposal before it becomes law. For further information, read EDRi’s recommendations on the Artificial Intelligence Act here.

Attempts to regulate AI must not rely on technical solutions to bias without addressing the underlying structural harms that AI can bring. Policymakers must be aware of the political, social and economic consequences of ‘solutions’ that concentrate power in the hands of technology companies.

In the cases where debiasing solutions are appropriate, policymakers must limit the discretion of private service providers to determine what constitutes harm, rights violations, discrimination and inequality. Policymakers should support an effective, decentralised system for assessing AI systems, discrimination and inequalities.

Fundamentally, debiasing solutions must not divert attention from political discussions around the role of AI in society. Regardless of technical performance, some systems should simply not be used because they violate human rights, reinforce inequality and injustice, and undermine democracy. Policymakers should ban the deployment of AI services that enable mass surveillance, exacerbate structural inequalities, and reproduce biological essentialism.

EDRi’s campaign, Reclaim Your Face, is running an official European Citizens’ Initiative (ECI) to call on EU policymakers to ban biometric mass surveillance. Your voice can make a difference. Sign now to help us ban biometric mass surveillance practices and join the 59 000 ECI signatories in calling for strong AI red lines.

In light of these shortcomings in AI policy-making, as well as the other viewpoints presented above, the report makes six recommendations for policymakers, researchers, advocates and activists, and proposes some broader frames for engaging technology companies going forward:

    1. Policymakers adopting technocentric approaches to address the discriminatory impact of AI must define problems clearly, set criteria for solutions, develop guidance on known limitations, and support further interdisciplinary research.
      1. Policymakers should engage with and learn from prior work on eliminating discrimination and inequalities as part of identifying problems to tackle.
      2. Policymakers should learn the basics of debiasing approaches.
      3. Policymakers should provide clearer guidance on applying debiasing and independent bias audits.
      4. Policymakers should demand an evaluation of system objectives as well as bias in its outcomes.
      5. Policymakers should support interdisciplinary research on holistic approaches to auditing AI systems for discriminatory effects. 
    2. AI policies must limit the discretion of AI service providers in addressing discrimination and inequalities.
      1. Policymakers should support an effective, decentralised system of assessing AI systems, discrimination and inequalities. 
      2. Policymakers should refocus the attention on bias and debiasing onto bias audits. 
      3. Policymakers should ensure that bias audits can be conducted independently.
      4. Policymakers should set hard limits on access to sensitive data for bias auditing or debiasing.
      5. Policymakers should avoid increasing surveillance of minorities or vulnerable populations in the name of debiasing or bias auditing.
    3. AI regulation needs to go beyond ADMs, data and algorithms to include the spectrum of AI applications and the broader harms associated with the production and deployment of these systems.
      1. Policymakers should expand the evidentiary scope of harm to non-technical criteria.
      2. Policymakers should expand the scope of who (or what) may be classified as an affected party or AI subject and how they are harmed. 
      3. Policymakers should address distributed harms, exclusions and predatory inclusion through AI-based systems.
      4. Policymakers should ensure that auditing extends across the supply chain of AI production and captures the evolution of services.
      5. Policymakers should require that AI services available through application programming interfaces (APIs) are audited by service providers in the contexts in which they are deployed.
      6. Policymakers should bring harms accrued in the production of AI into the scope of regulations.
      7. Policymakers should ban the deployment of AI services that reproduce biological essentialisms and fascist, racist or supremacist conceptions of humans and societies.
    4. AI policies should empower individuals, communities and organisations to contest AI-based systems and to demand redress.
      1. Policymakers should enable the contestation and banning of harmful AI-based services.
      2. Policymakers should enable affected parties to trigger internal and independent audits.
      3. Policymakers should ensure that audits of AI systems include and empower affected parties.
    5. AI regulation cannot be divorced from the power of big tech companies to control computational infrastructures. Addressing the rise of this infrastructural power requires long-term strategy and planning.
      1. Policymakers should include within AI policy the broader impacts of the introduction of AI through computational infrastructures.
      2. Policymakers should invest in research on computational infrastructures.
    6. AI regulation should protect, empower and hold accountable organisations and public institutions as they adopt AI-based systems. 
      1. Policymakers should grant rights of redress to organisations that deploy or are affected by third-party AI services and depend on computational infrastructures.
      2. Policymakers should assess and build the capacity of public and private sector organisations to deploy AI while mitigating its broader harms and inequalities.

* The research was completed by Agathe Balayn and Dr Seda Gürses of the Delft University of Technology, the Netherlands. The report was commissioned and reviewed by EDRi. This is not an EDRi position paper and does not necessarily reflect the stance of all EDRi members.

Sign the European Citizens’ Initiative (ECI)

If you're an EU citizen, you can help us change EU laws by signing the official #ReclaimYourFace initiative to ban biometric mass surveillance practices:

This is not a regular petition, but an official “European Citizens’ Initiative” (ECI) run by EDRi on behalf of the European Commission. This means your signature must be officially verified by national authorities, according to each EU country’s specific rules. We cannot control what data they require, as it is mandated by Regulation (EU) 2019/788 on the European citizens’ initiative for the purpose of confirming your signature. We can only use the information that you provide in Step 2 to contact you with updates, if you choose to enter it. Furthermore, our ECI signature collection system has been verified by the German Federal Office for Information Security (BSI) to ensure it is compliant with the EU’s Regulation on ECIs. Please see our “Why ECI?” page for further details, and check out our privacy policy.

This ECI is open to all EU citizens, even if you currently live outside the EU (although there are special rules for Germany). Unfortunately, if you are not an EU national, the EU’s official rules say that you cannot sign. Check https://reclaimyourface.eu for other ways that non-EU citizens can help the cause.

Note to German citizens: It is possible to sign our ECI petition if you live outside the EU, but German rules mean that for German citizens specifically, your signature will only be valid if you are registered with your current permanent residence at the relevant German diplomatic representation. If you are not registered, then unfortunately your signature will not be counted. You can read more information about the rules. This rule does not apply to citizens of any other EU country.

Legally, if we reach 1 million signatures (with minimum thresholds met in at least 7 EU countries) then the European Commission must meet with us to discuss our proposal for a new law. They must then issue a formal communication (a piece of EU soft law) explaining why they are or are not acting on our proposal, and they may also ask the European Parliament to open a debate on the topic. For these reasons, a European Citizens’ Initiative (ECI) is a powerful tool for getting our topic onto the EU agenda and showing wide public support for banning biometric mass surveillance practices.

Learn more about the campaign to ban biometric mass surveillance practices at our official website

Reclaim Your Face