Building a coalition for Digital Dignity

In 2020, EDRi started to build the ‘Digital Dignity Coalition’, a group of organisations and activists active at the EU level dedicated to upholding rights in digital spaces and resisting harmful uses of technology. We’ve been organising to understand and resist how technological practices differentiate, target and experiment on communities at the margins. This article sets out what we’ve done so far.

By EDRi · September 22, 2021

How can we live in dignity when technological ‘progress’ puts us at risk?
Are technologies really neutral when they benefit some but marginalise others?
How do we get justice when technology is used to reinforce power imbalances? 
How can we mobilise to resist discrimination and harms through technology and online?

An EU level coalition for dignity and justice in a digital world

These are the questions that underpin the emergent work of EDRi and 25+ human rights and social justice groups around the concept of “digital dignity”. The Digital Dignity Coalition has been meeting since late 2020 to collectively learn about and contest the practices and structures that hardwire discrimination into existing and emerging technologies and digital spaces. The coalition is the reflection at EU level of our joint work with the Digital Freedom Fund, such as the Digital Rights for All project.

So far, the Digital Dignity coalition has brought together the perspectives, experiences and rights of racialised people, queer and trans people, migrants, undocumented people, sex workers, environmental activists, women, homeless people and more. Our aim has been to explore the increasing digitalisation of society from the perspective of the communities most affected.

We’ve started to consider a range of artificial intelligence (AI) issues, including predictive policing, uses of facial recognition in public spaces (a form of biometric mass surveillance) and ‘smart’ border controls. From this, we see that – time and time again – historically marginalised communities bear the brunt of dangerously ill-conceived, or even deliberately harmful, systems. Other areas of our work, like challenging manipulative surveillance advertising, checking the power of online platforms, and ensuring accountability for law enforcement uses of data, are also relevant to the concept of digital dignity.

How is dignity impacted by the use of digital technologies?

Whilst dignity is often thought of as one of the most difficult rights to understand and apply, it can give us a useful lens to interrogate the human rights impact of digital technologies from a holistic perspective.

The European Union Charter of Fundamental Rights says that “Human dignity is inviolable. It must be respected and protected.” Dignity is often thought of as the foundation for all other human rights, and it is an important component of protecting our autonomy, our freedom, our self-respect and self-determination, others’ respect for us and of course, our rights to live equally, without being discriminated against, and with equitable access to justice.

Developments in digital technology increasingly impact human dignity. Data about us can have a really profound impact on the ways in which we live our lives, whether we feel that our bodies and even our very existence are being treated with respect, and whether the unique differences that make each of us “us” are being taken into account in the development of new tech (spoiler alert: usually, the answer is no).

Many of the new technologies that have appeared on the European market in recent years have not been accessible to people with disabilities. Developers of new AI systems have targeted marginalised communities for experimentation when in vulnerable situations, as seen in abusive facial recognition deployments and trials of ‘AI lie detectors’ at EU borders. And hyper-targeted adverts have led to the exclusion and stigmatisation of already minoritised individuals and groups. All such examples rely on a power imbalance between those deploying the technology and those targeted for experimentation and manipulation.

How are we represented in data?

One of the main threats to human dignity is the way systems purport to detect, measure and categorise sensitive aspects of our lives, bodies and identities. For example, facial recognition and other biometric mass surveillance technologies rely on measuring and sensing people’s faces and bodies in intrusive ways and in many cases without their knowledge. This data is used to create templates of us to be stored in vast databases, with fewer and fewer safeguards on how those in power can access and use the information. Often, governments use narratives around security, terrorism and migration control to create a political appetite for further intrusion.

In some applications (such as emotion recognition), the system adds an extra layer of judgment, premised on the idea that you can tell something about a person’s character or intentions by how they look. For people across different cultures, neuro-diverse people, and trans and gender non-conforming people, the result can be a denial of access to public spaces or public services, or a public singling out for not being “normal”.

In other contexts, we’ve seen how AI systems are used to decide whether or not we get welfare or other social provisions, whether we get a job, or to determine the frequency of our encounters with the police.

This is why EDRi and over 60 other civil society groups, many of whom are involved in the Digital Dignity coalition, have called for “red lines” when it comes to uses of AI. We argued that those regulating AI must draw clear limits on which systems may be used, always centring fundamental rights. This must mean prohibiting uses of AI that reinforce structural discrimination and enable mass surveillance.

Digital Dignity Coalition: Our collective work 

In our first few meetings, we took the time to get to know one another, to hear about each other’s experiences working in human rights and social justice communities and movements, and to share case studies of how racialised and minoritised people are already being targeted online or discriminated against as a result of the use of digital technologies.

Following these workshops, we took some time to reflect and strategise in order to come up with two priorities for the coalition:

  1. To create regular informal space in the EU advocacy landscape where we can get to know each other more, share experiences, strategies, reflect, and also share specific knowledge on digital topics, individual and collective harms, as well as our own digital skills and safety;
  2. To develop working groups on various EU digital policies such as the EU AI legislation; Policing and Technology; and de-bunking inaccurate, misleading and frequently discriminatory ‘securitisation’ narratives from the perspective of digital/human rights.

So far in 2021, the coalition has been especially focused on the European Commission’s proposed “Artificial Intelligence Act” (AIA), which will have major ramifications for people’s rights and freedoms across the EU. Across four workshops this year, we have shared insights and experiences with one another about the content of the Act, and have started to chart what might need to change in order to redress its shortcomings.

One specific example of how the Digital Dignity coalition is already helping members broaden one another’s perspectives: thanks to the advice of the group, EDRi’s feedback to the Commission on the Act specifically included a recommendation to include gender identity as a protected characteristic. Many other points raised by the coalition have been incorporated into EDRi’s submission, and some organisations from the EDRi network and the Digital Dignity coalition also made their own submissions to call for greater anti-discrimination and fundamental rights protections in the Act.

What’s next?

As the proposed AI Act has explicitly recognised that some uses of AI cannot be compatible with the EU’s fundamental rights framework, we are entering uncharted waters. We have the potential to put unprecedented protections for dignity, non-discrimination and equality into the Act, and to ensure the rights of marginalised communities are centred in the EU’s approach.

We also see opportunities to extend beyond current discussions of AI, to also include issues of platform power and much more. We will continue to learn from and share with one another as we develop our respective positions on the AI Act. And we may go beyond policy work, into joint projects or activities to raise public awareness about digital dignity and the fight to protect it.

Dignity in the digital age is not just our aspiration: it is our right. 

Get involved:

  • Want to be a part of the Digital Dignity coalition? Please send a message to our co-ordinators, Ella Jakubowska, Sarah Chander and Fenya Fischer
  • Please note that the coalition is specifically intended to be a constructive and supportive space for activists, NGO staff or volunteers, advocates and campaigners working across human rights and social justice active at EU level and from a variety of anti-discrimination and/or inclusion perspectives. You don’t need to know anything about data, artificial intelligence or digital rights to join us.

Contribution by:

Ella Jakubowska

Policy Advisor

Twitter: @ellajakubowska1


Sarah Chander

Senior Policy Advisor

Twitter: @sarahchander