The secret services’ reign of confusion, rogue mayors, racist tech and algorithm oversight (or not)
Have a quick read through January’s most interesting developments at the intersection of human rights and technology from the Netherlands.
Who does the algorithm supervisor serve?
Supervision of the use of algorithmic systems, and mandatory human rights impact assessments, were central to the electoral pact Bits of Freedom, Amnesty and others persuaded nine Dutch coalition and opposition parties to sign in the run-up to the 2021 elections. Eventually, they even made their way into the coalition agreement. It goes without saying that we’re pleased to see these plans coming to fruition.
However, we’re also wary of how the Dutch Data Protection Authority (DPA) has decided to interpret its role as algorithm supervisor. In its new role, the DPA will identify cross-sectoral risks, share knowledge, improve collaboration, contribute to joint standard-setting and create an overview of applicable rules. It won’t handle citizens’ complaints or carry out any kind of enforcement activities.
We can’t help but worry that the DPA has carried over its closed-door policy to its new role. It makes us wonder: if citizens cannot turn to the supervisor, who does the supervisor serve?
Cabinet continues to chip away at secret service supervision and safeguards
For eight years, the secret services – and the cabinet members responsible for them – have been holding citizens and Parliament captive in a semantic tug-of-war.
Confusion and exhaustion are central to a strategy that has seen the secret services given more and more power with fewer safeguards, despite strong objections from citizens, NGOs and Parliament. Since the new Dutch Intelligence and Security Services Act (Wiv 2017) went into effect in 2017, we’ve witnessed one legislative amendment realised and one announced, a “temporary arrangement”, a new “temporary” bill and, this month, a proposal for an amendment to that temporary bill. In other words, Members of Parliament are currently being asked to debate a proposed supplement to a law, while both the supplement and the law itself are set to be amended soon.
Bits of Freedom responded to the consultation (read more here), pointing out how the supplement will lead to diminished supervision and safeguards and is far from “temporary”, and calling for due process.
Racist tech and the AI Act discussed during Parliamentary round table
After a few delays, Parliament finally got around to organising a round table on the opportunities and risks of artificial intelligence (AI). Two contributions stood out.
Quirine Eijkman of the Netherlands Institute for Human Rights spoke about Robin Pocornie’s case against the discriminatory use of proctoring software by VU University Amsterdam. During Covid, the university made students use Proctorio when sitting for exams. One of those students, Pocornie, noticed the software often failed to properly “process” her face.
As a result, Pocornie struggled to access her exams, and even resorted to sitting through them with a bright light shining on her face. Supported by our friends at the Racism and Technology Center (“Dutch Institute for Human Rights: Use of anti-cheating software can be algorithmic discrimination (i.e. racist)”), the VU student filed a complaint with the Netherlands Institute for Human Rights. At the end of 2022, the Institute ruled she had presented sufficient facts for a presumption of discrimination, and that it was then up to VU University to prove otherwise. As public institutions increasingly make use of algorithms, and governments around the world decide on the rules to govern them, it’s hard to overstate the importance of cases like this.
The second contribution came from Bits of Freedom’s Nadia Benaissa, and focused in particular on the opportunities for raising protections in the AI Act. Benaissa relayed Bits of Freedom’s position on the right to transparency, explainability and verifiability. She also pointed to how the current text falls short in safeguarding people’s right to effective legal protection. Finally, she reiterated the importance of a broad ban on biometric identification (which we also spoke about at the European Parliament): one that covers not just identification taking place in “real time”, and that applies to private actors as well as law enforcement agencies.
The Netherlands: taming Big Tech?
On January 18, the New York Times (“How the Netherlands Is Taming Big Tech”) reported on Dutch privacy activists and advisors spurring major, global changes at Google, Microsoft and Zoom. Interviewing Bits of Freedom co-founder Sjoera Nas, among others, the article identifies how audits of digital services commissioned by the government have led to Big Tech companies making “major privacy changes”, with the effects being felt around the globe.
Rogue mayors test the limits of their powers
In more local news, mayors have been testing the limits of their powers by issuing online exclusion orders. They claim that chat and social media spaces constitute “online public spaces” and that, when something pertaining to their jurisdiction happens there (such as someone calling for a demonstration), they therefore have the power to intervene. In 2021, the Mayor of Utrecht issued an order banning a Utrecht resident from Telegram – an order that is currently being challenged in court.
Contribution by: Evelyn Austin, Executive Director, EDRi member, Bits of Freedom