Designed by Photo Boards/Unsplash
Facial recognition is one example of an AI-enabled technology within a long and wide-reaching history of surveillance of people of colour and other racialised and marginalised communities, as well as the exclusion of people with disabilities. These technologies are used to target, differentiate, assess, and experiment on communities at the margins – shaping decisions that affect how often they encounter law enforcement, their privacy, their access to social services, and their ability to get a job.
We have already seen that racialised groups, migrants, LGBTQI+ communities, people with disabilities, working-class people, women, sex workers and many other minoritised and historically-excluded communities and individuals are the first to suffer from the ways in which technology can amplify bias, surveil and classify. In two launch workshops in December, we will start to collectively explore how the use of these technologies can increase systemic inequality, and to build a coalition of groups across Europe to contest and resist digital surveillance, facial recognition and digital discrimination.
The start of this collaboration will consist of two online ‘Digital Dignity’ workshops, to be held on Monday 7 December, 13:00 – 15:30 CET and Thursday 10 December, 13:00 – 15:30 CET. These workshops will bring together groups working on a range of rights and justice issues, as a first step to explore artificial intelligence and facial recognition, how they impact our work and communities, and the possibilities for joint collaboration and resistance. If you would like to be involved or want more information, please get in touch with the organisers at ella<dot>jakubowska<at>edri<dot>org and sarah<dot>chander<at>edri<dot>org.