Artificial intelligence – a tool of austerity
This week Human Rights Watch published a much-needed comment on the EU’s Artificial Intelligence Regulation. As governments increasingly resort to AI systems to administer social security and public services more broadly, there is an ever-greater need to analyse the impact on fundamental rights and the broader public interest.
AI in social welfare
Drawing on case studies of AI systems used in the context of social security in Ireland, France, the Netherlands, Austria, Poland, and the United Kingdom, the report from Human Rights Watch notes a marked trend toward algorithmic systems to allocate social security support and to predict and prevent benefits fraud.
For example, in the Netherlands, the government famously rolled out SyRI in predominantly low-income neighbourhoods in an effort to predict people’s likelihood of committing benefits or tax fraud. Whilst a court struck down the system on privacy grounds, the ruling came only after vast harm had been inflicted on poor and working-class people, largely from racialised and migrant communities.
In Austria, as highlighted by EDRi member epicenter.works, the public employment service (AMS) uses an algorithm to predict a job seeker’s employment prospects based on factors such as gender, age group, citizenship, health, occupation, and work experience. By prioritising services on the basis of these predictions, the AMS algorithm has reduced support to job seekers with both low and high employment prospects, and has also discriminated against women over 30, women with childcare obligations, and migrants.
Often introduced as cost-cutting efficiency measures with a supposedly neutral impact on people’s rights, the systems in these case studies have had a number of negative impacts on already marginalised individuals and groups in society.
In particular, the report finds that this trend toward automation can ‘discriminate against people who need social security support, compromise their privacy, and make it harder for them to qualify for government assistance. But the regulation will do little to prevent or rectify these harms.’
Implications for the EU Artificial Intelligence Act
How far does the EU’s proposed AI Act address these concerns? According to Human Rights Watch, much more needs to be done, and the AIA must be amended.
Echoing EDRi’s position, the report argues that the EU’s proposal is a weak defence against these dangers. Whilst the proposal would ban a narrow set of AI systems used by public authorities to measure people’s “trustworthiness” and single them out for “detrimental or unfavourable treatment” based on these scores, it is unclear whether this vague language would prevent harmful and discriminatory surveillance of poor people in social security systems.
To change this, Human Rights Watch recommends that the EU:
- Amend the regulation to ban social scoring that unduly interferes with human rights, including the rights to social security, an adequate standard of living, privacy, and non-discrimination.
- Include a process to prohibit future artificial intelligence developments that pose “unacceptable risk” to rights.
- Require users of automated welfare systems to conduct human rights impact assessments, especially before the systems are deployed and whenever they are significantly changed.