
We want more than “symbolic” gestures in response to discriminatory algorithms

In an escalating scandal over child benefits, over 26,000 families were wrongly accused of fraud by the Dutch tax authority. Families were forced to repay tens of thousands of euros, resulting in unemployment, divorces, and families losing their homes. EDRi member Bits of Freedom reveals the discriminatory algorithms used by the authority and urges the government to ban their use and develop legislation on Artificial Intelligence.

By Bits of Freedom (guest author) · February 10, 2021


Many victims were people of color. People who requested insight into the decision-making process leading to the wrongful accusations had their dossiers “disclosed” to them, but were confronted with page after page of entirely redacted text. This further increased distrust in the tax authority and became a symbol of the lack of government transparency.

After repeatedly denying any wrongdoing, the tax authority finally admitted it had used dual nationality as a marker to determine whether someone was likely to commit fraud. Lawyers representing the families say some felt they were targeted because of their ‘foreign-looking names’. An investigating committee issued a parliamentary report, ‘Unprecedented Injustice’, which concluded that ‘fundamental principles of the rule of law were violated’. The report led to the resignation of the Dutch government, which claimed to be taking its ‘political responsibility’. With general elections upcoming in March this year, many see this as a merely symbolic gesture.

The tax authority used self-learning algorithms to profile residents. The choice to use discriminatory identifiers, however, was a human one. As the impact of the child benefits scandal becomes ever clearer, politicians agree that many mistakes were made within ‘the system’ and that ‘institutional racism’ should be condemned. Bits of Freedom agrees that the Dutch system is flawed in many ways. However, the system does not operate of its own accord. ‘The system’, and the algorithms it used, were programmed by humans and reflect political and policy decisions. Therefore, to prevent a similar disaster, we need more than a political gesture: we need accountability.

The Netherlands Court of Audit has stated that there are many problems with the government’s use of algorithms and warns of the risk of discrimination and a lack of ethical guidance. Just this week another investigation was published, this time into municipalities’ use of algorithms. Several municipalities currently use algorithms to investigate social assistance fraud. This raises concerns, as the municipalities seem to be ignoring all the warning signs and lessons learned from the child benefits scandal.

Bits of Freedom urges the government to prohibit the use of discriminatory algorithms and develop legislation on Artificial Intelligence. With EDRi, Bits of Freedom calls for the introduction of red lines in the European Commission’s proposal on Artificial Intelligence.

Read the blog in Dutch here.