AI Regulation: The EU should not give in to the surveillance industry
Although it claims to protect our liberties, the European Commission's recent legislative proposal on artificial intelligence (AI) promotes the accelerated development of all aspects of AI, in particular for security purposes.
Loaded with exceptions, resting on a stale risk-based approach, and picking up the French government’s rhetoric on the need for more experimentation, this text must be transformed radically. In its current state, it risks endangering the slim legal protections that European law provides in the face of the massive deployment of surveillance techniques in public space, eroding core principles of the General Data Protection Regulation (GDPR) and the Law Enforcement Directive (see LQDN’s detailed analysis here).
Far from suspending AI systems that obviously violate European law (such as facial recognition systems), the proposal limits itself to prohibiting four specific “uses” while providing broad exemptions to national authorities. These prohibitions are so narrow and poorly defined that they give the impression that the Commission’s aim was to authorise the widest possible measures rather than to prohibit any (on this subject, see EDRi’s complete analysis of the draft regulation). Indeed, the use of biometric identification systems in “real time” is prohibited except to find “potential specific victims of crime”, to prevent “specific, substantial and imminent threats to the lives of natural persons”, or to prevent a “terrorist attack”. One understands that with such broad exceptions, this “prohibition” is in fact an authorisation, and not at all a prohibition of facial recognition.
This draft also introduces into the regulation a distinction long sought by representatives of the security industry: between “real-time” and ex post biometric surveillance. This distinction aims above all to reassure several European police forces (in France in particular) that are already making massive use of facial recognition (see LQDN’s article about the French police’s use of facial recognition here).
More broadly, the safeguards proposed to regulate AI systems are generally inadequate to guarantee effective control, as most of these systems would only be subject to a system of self-certification. While this approach, based on risk analysis, is supposed to reassure the private sector, it fails entirely to guarantee that AI suppliers will respect and protect the human rights of individuals (see the analysis of EDRi member Access Now here). Manufacturers and public authorities are used to performing such analyses, which suit their purposes quite well. This already happened with the discreet launch of facial recognition technologies in the city of Nice, where the mayor’s office sent its analysis to the French regulator only a few days before their deployment.
The Commission has therefore made a qualitative leap in its efforts toward “better law-making” by anticipating and satisfying, before negotiations have even opened, the lobbying campaigns of the industrial security apparatus. Indeed, this draft law consolidates a political agenda in which introducing AI is presented as necessary and inevitable for entire sectors of society. It rests on a naïve and fantasised vision of these technologies and of the companies that supply them. As the Council and the European Parliament now forge their own positions on this proposal, the situation could become even worse: several member states want a separate law for law enforcement with, we imagine, even looser limitations and even wider exceptions.
Image credit: Patrick Robert Doyle / Unsplash
(Contribution by: EDRi member, La Quadrature du Net)