EU’s AI proposal must go further to prevent surveillance and discrimination
The European Commission has just published the EU draft regulation on artificial intelligence (AI). AI systems are increasingly used in all areas of life – to monitor us at protests, to identify us for access to health and public services, and to make predictions about our behaviour or how much ‘risk’ we pose. Without clear safeguards, these systems could deepen the power imbalance between those who develop and deploy AI and those who are subject to it. This is why a strong AI regulation from the EU is needed.
“Whilst it is positive that the Commission acknowledges that some uses of AI are simply unacceptable and need to be prohibited, the draft law does not prohibit the full extent of unacceptable uses of AI and in particular all forms of biometric mass surveillance. This leaves a worrying gap for discriminatory and surveillance technologies used by governments and companies. The regulation allows too wide a scope for self-regulation by companies profiting from AI. People, not companies, need to be at the centre of this regulation.”
Sarah Chander, Senior Policy Lead on AI at European Digital Rights
Civil society has demonstrated how AI used in Europe – for predictive policing, for mass surveillance, at the border, and to judge and predict our behaviour on the basis of our bodies, emotions and sensitive identity traits (like race, gender identity and disability) – violates our rights and disproportionately affects marginalised groups. Yet the proposal neither sets legal limits on the development of all problematic uses of AI, nor requires sufficient safeguards from deployers of “high risk” AI. As a result, it fails to provide protection and remedies for people who are likely to suffer harm from AI, and it continues to place the burden on civil society to challenge opaque and unjust AI.
“Biometric mass surveillance reduces our bodies to walking barcodes with the intention of judging the links between our data, physical appearance and our intentions. We should protect this sensitive data because we only have one face, which we cannot swap or leave at home. Once we give up this data, we will have lost all control.”
Lotte Houwing, from EDRi member Bits of Freedom (The Netherlands)
The proposal takes a significant step towards protecting people in Europe from creepy facial recognition by banning law enforcement use of “real time remote biometric identification”, with exceptions that require specific national laws and case-by-case authorisation. Whilst this is a move in the right direction, many biometric mass surveillance practices (by local governments or corporations) are not covered, and a series of highly worrying exceptions would allow law enforcement to find loopholes. This is a missed opportunity for the Commission to take a truly comprehensive approach against all forms of biometric mass surveillance.
The majority of requirements in the proposal naively rely on AI developers to implement technical solutions to complex social issues, solutions that are likely to be self-assessed by the companies themselves. In this way, the proposal enables a profitable market of unjust AI to be used for surveillance and discrimination, and pins the blame on the technology developers instead of the institutions or companies putting the systems to use.
EDRi, together with 60+ human rights organisations and 116 MEPs, has asked the European Commission to follow through on its promise of creating a truly people-centred AI regulation. Now it is time for the European Parliament to ensure that the AI Regulation goes beyond what the European Commission has proposed and includes a complete ban on AI uses that are incompatible with fundamental rights, ensures full transparency and accountability for all high risk systems in the EU, and provides substantial mechanisms for collective redress for those who have been harmed by AI systems.
(Image credit: Lorenzo Miola/ Fine Arts)