Amnesty International calls for a ban on discriminatory algorithms in its report Xenophobic Machines
On 25 October 2021, EDRi observer Amnesty International published a report on the use of an algorithmic decision-making (ADM) system by the Dutch tax authorities to detect fraud. The report shows how discrimination and racial profiling were baked into the design of the ADM system.
Tens of thousands of parents and caregivers, mostly from low-income families and immigrant backgrounds, were falsely accused of fraud. While the Dutch government has announced a number of safeguards to prevent similar human rights violations in the future, Amnesty International's analysis shows that these safeguards fall short on all fronts.
Social security agencies worldwide are increasingly automating their fraud and crime "prediction" systems, and the Netherlands is at the forefront of this development. In 2013, the Dutch tax authorities adopted an ADM system to create risk profiles of individuals applying for childcare benefits who were supposedly more likely to submit inaccurate applications and potentially commit fraud. Every parent or caregiver who legally resides in the Netherlands while working or studying is eligible for childcare benefits, a contribution towards the costs of childcare reimbursed by the Dutch government when their child goes to a registered day-care centre. Selection by the ADM system could have far-reaching consequences, and false accusations were common practice within the tax authorities. Parents and caregivers who were selected were subjected to an investigation by a civil servant, had their benefits suspended and could face heavy fines. Tens of thousands of parents and caregivers were falsely accused of fraud and faced devastating problems, ranging from debt and unemployment to forced evictions when they were unable to pay their rent or mortgage. Others were left with mental health issues and strained personal relationships, leading to divorces and broken homes.
People of non-Dutch nationalities were more likely to be selected by the ADM system because the tax authorities used whether an applicant held Dutch nationality as a risk factor, leading to non-Dutch nationals receiving higher risk scores. This scoring reveals the assumption held by the tax authorities that people of certain nationalities would be more likely to commit fraud or crime than people of other nationalities. The tax authorities also manually searched for all applicants holding a certain nationality after suspicions of fraud committed by some people with roots or links to that nationality. This indicates an acceptance of the practice of generalizing the behaviour of some individuals to all others who are perceived to share the same race or ethnicity. Under international human rights law, such differential treatment based on national or ethnic origin is considered racial profiling.
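To make the mechanism concrete, the sketch below shows how a simple weighted-sum risk score behaves once nationality is included as a risk indicator. It is a hypothetical illustration only: the feature names, weights and scoring function are invented for this example and are not taken from the report or from the actual system.

```python
# Hypothetical illustration: a minimal weighted-sum risk score in which
# "non-Dutch nationality" is treated as a risk indicator. All names and
# weights below are assumptions made for this sketch.

RISK_WEIGHTS = {
    "incomplete_paperwork": 0.3,    # assumed administrative signal
    "income_mismatch": 0.4,         # assumed administrative signal
    "non_dutch_nationality": 0.5,   # the discriminatory factor described in the report
}

def risk_score(applicant: dict) -> float:
    """Sum the weights of all indicators that apply to an applicant."""
    return sum(weight for key, weight in RISK_WEIGHTS.items() if applicant.get(key))

dutch_applicant = {"incomplete_paperwork": True, "non_dutch_nationality": False}
non_dutch_applicant = {"incomplete_paperwork": True, "non_dutch_nationality": True}

# Identical paperwork issue, but the second applicant scores higher purely
# because of nationality -- the differential treatment described above.
print(risk_score(dutch_applicant))      # 0.3
print(risk_score(non_dutch_applicant))  # 0.8
```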
The ADM system was semi-automated: when an individual was flagged as a fraud risk, a civil servant was required to conduct a manual review. However, the civil servant was given no information as to why the system had generated a higher risk score. This meant that affected individuals who tried to find out what mistake they had made did not receive an answer, partly because the civil servants themselves had no idea. Such opaque black box systems, in which the inputs and calculations of the system are not visible, lead to an absence of accountability and oversight. In addition, due to the semi-automated character of the system, parents and caregivers did not have the right to meaningful information about the logic of the algorithms: under the General Data Protection Regulation, this right applies only, in very limited circumstances, to decisions made by fully automated systems. Furthermore, the algorithmic system had self-learning elements, giving it the ability to learn from experience over time, independently and autonomously, and to change how it worked without these changes being explicitly programmed by the tax authorities' developers. This entails a significant risk that the deployment of the self-learning system amplifies intentional and unintentional biases, leading to results that are systematically prejudiced due to erroneous assumptions embedded in the self-learning process.
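The bias-amplification risk of such a self-learning setup can be thought of as a feedback loop: the system only ever receives outcome labels for the cases it selects for investigation, so an initial bias in selection feeds back into the next update. The simulation below is a rough sketch under assumed numbers and an assumed update rule; it does not reproduce the actual system, only the general dynamic.

```python
# Hypothetical sketch of a bias-amplifying feedback loop in a self-learning
# risk model. The population size, rates, weights and update rule are all
# assumptions made for this illustration.

import random

random.seed(0)

weight_nationality = 0.1   # initial (already biased) weight on the "non-Dutch" indicator
BASE_RATE = 0.02           # assumed identical rate of genuine errors for everyone

for round_no in range(5):
    # 20% of the assumed population holds a non-Dutch nationality.
    population = [{"non_dutch": random.random() < 0.2} for _ in range(10_000)]

    # Selection for manual review is driven by the risk score, so non-Dutch
    # applicants are over-represented among investigated cases.
    investigated = [p for p in population
                    if random.random() < 0.05 + weight_nationality * p["non_dutch"]]

    # Only investigated cases can ever receive a "fraud" label; everyone else
    # stays invisible to the model, whatever their actual behaviour.
    labelled_fraud = [p for p in investigated if random.random() < BASE_RATE]

    # Naive "self-learning" update: raise the nationality weight whenever
    # non-Dutch applicants are over-represented among labelled cases compared
    # with their 20% share of the population.
    if labelled_fraud:
        share_non_dutch = sum(p["non_dutch"] for p in labelled_fraud) / len(labelled_fraud)
        weight_nationality += 0.1 * (share_non_dutch - 0.2)

    print(f"round {round_no}: weight on nationality = {weight_nationality:.2f}")
```

Because the labelled cases are drawn from a selection that is already skewed by the nationality weight, the weight tends to grow round after round even though the underlying error rate is identical for both groups.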
To prevent future human rights violations related to the use of algorithmic systems, binding and enforceable measures and safeguards are necessary. Amnesty International is calling on governments to:
- Implement a mandatory and binding human rights impact assessment before the use of algorithmic systems.
- Ensure maximum transparency regarding public sector use of algorithmic systems by establishing a public registry containing detailed and comprehensive information on such systems.
- Be fully transparent and provide meaningful information to affected individuals about the underlying logic, significance and expected consequences of decisions, even when these decisions are not fully automated, regardless of the level of human involvement in the decision-making process.
- Establish effective monitoring and oversight mechanisms for algorithmic systems in the public sector.
- Stop the use of black box systems and self-learning algorithms where the decision is likely to have a significant impact on the rights of individuals.
- Establish a clear, unambiguous and legally binding ban on the use of data on nationality and ethnicity, or proxies thereof, in risk-scoring for law enforcement purposes aimed at identifying potential crime or fraud suspects.
- Hold those responsible for violations to account and provide an effective remedy to individuals and groups whose rights have been violated.
Image credit: Markus Spiske / Unsplash
(Contribution by: Merel Koning, Senior Policy Advisor, & Tamilla Abdul-Aliyeva, Senior Policy Officer Tech & Human Rights, EDRi observer Amnesty International)
- Amnesty International: “Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal” (25.10.2021), available in English and Dutch.