When technology is the problem, not the solution: Lessons from harmful consequences of techno-solutionism in digital surveillance
AI-powered surveillance systems are being deployed globally – from Israel and Russia to EU member states. These systems target marginalised communities under the guise of improving security and efficiency. To rectify these harms, we must challenge techno-solutionist narratives, rethink why and how technology is used, and put human rights at the centre.
More ‘innovation’, more violations
In early 2025, two developments shed light on the real-world consequences of techno-solutionist narratives in security policy. In Israel, the military’s infamous Unit 8200 began deploying an AI-powered system that tracks human rights defenders and Palestinian communities in the West Bank. As reported by The Guardian, the system allows military operatives to monitor construction, follow people’s movements, and “predict social unrest” across the occupied Palestinian territories. As one source explained, with this system “I have more tools to know what every person in the West Bank is doing”.
Shortly after, in Russia, the city of St. Petersburg introduced ‘ethnicity recognition’ software in 8,000 of its 102,000 CCTV cameras – which, by the way, amounts to one camera for every 50 residents. Authorities justified the measure as a way to prevent “ethnic enclaves” and “reduce social tension”. While framed as an urban planning tool, the system effectively enables real-time racial profiling on a mass scale.
We could think that Russia and Israel are just more examples of authoritarian governments doing what “bad guys” do – oppressing their own populations and controlling those who are already in a vulnerable position – but we would be wrong. Similarly worrying trends have been unfolding for a while within the EU. In countries like Cyprus, Greece and Spain – and up to nine other EU member states – authorities have tested or deployed AI-powered systems to monitor migrant movements and border crossings in opaque ways. These systems include predictive policing algorithms, risk profiling, and automated surveillance drones, often justified as enhancing “security”.
These are not isolated cases. They reflect a global trend: using advanced technologies to control and disproportionately target marginalised groups, and to reshape societies through digital surveillance.
“Fortress Europe” is being built with code and cameras
While the contexts differ, the same long-standing logic of digital securitisation that underpins ethnicity recognition systems and the monitoring of entire populations can be observed in the EU’s own policies and practices. A recent example from the law enforcement side: in March 2025, the Hungarian government passed a law criminalising the right to peaceful assembly and allowing Remote Biometric Identification (RBI) to be deployed to identify – and prosecute – those attending Budapest Pride, in violation of the AI Act.
Consider more examples from the migration context. The EU’s Pact on Migration and Asylum expanded the scope of the EURODAC database, which historically gathered the fingerprints of asylum seekers. It now mandates the collection of facial images, enabling facial recognition to be used on children as young as six. This reflects a securitised view of migration, in which people on the move are treated primarily as threats to be managed and their personal data is processed for the sole purposes of surveillance and profiling. The Pact also drives increased digitalisation of migration ‘management’, which, according to Von der Leyen’s Commission, aims to “provide increased efficiency in procedures”.
Meanwhile, the AI Act gave a blanket exemption to national security uses of artificial intelligence systems – which is how migration ‘management’ is increasingly framed in countries like Italy and Hungary. And, as denounced by the Protect not Surveil coalition, the AI Act failed to ban many harmful systems in migration and law enforcement contexts.
On top of these legislative flaws, the EU is also directing European taxpayers’ money towards developing and testing these tools for use by police and border guards. Past EU-funded projects such as iBorderCtrl and ROBORDER have trialled polygraphs based on emotion detection AI (a flawed technology built on junk science with no validity) and autonomous drones at external borders. Another EU programme, AEGIS, is being developed in the Netherlands, Romania and Spain, and will lead to the installation of AI-powered cameras in Antwerp. These will allegedly be used to “detect threats” and respond to them in order to “protect Jewish communities”.
These systems, even where still experimental, shape and affect the nature of law enforcement and border control activities, further criminalising whole communities. In parallel, this digitalisation drives a privatisation of public services – border management and policing, for example – which gives companies in the military and security sectors a strong incentive to keep “innovating” and making money from public procurement, profiting from the policing of our faces, bodies and identities.
The underlying assumption behind such uses of technology is always the same: that technology is efficient and neutral, and that more surveillance equals more safety. That instead of investing in actual solutions, we can “throw” tech at social problems, something will stick, and we will be able to manage them better.
Israel and Russia provide clear examples that these tools do not operate in a vacuum. They reinforce existing power dynamics, automate discrimination, and amplify state control over populations in vulnerable situations. The same happens when EU member states deploy them.
We need to rethink why we deploy technology
To challenge this techno-solutionist narrative, we need to rethink why we deploy technology in the first place. Surveillance tools are not just digital infrastructure that makes our countries more “modern”; they are instruments of power by nature and definition. From predictive policing analytics to facial recognition, these systems are shaped by the political and social contexts in which they are built. They replicate and intensify existing inequalities, particularly when deployed in areas like border and migration control, where they further criminalise those already at the margins.
The examples of Russia, Israel and the EU’s migration policies illustrate how AI can be wielded to dominate, discipline and exclude. In each case, the issue is not merely the data gathered – how much, or whom it disproportionately targets – but also the discretionary power it grants officials to make decisions that impact lives, on the assumption that the machine is always right. In the EU, framing surveillance as a matter of “efficiency” or “innovation” risks masking its true function: social control of those at the margins.
If the EU continues to treat technological systems as neutral, it will end up replicating precisely the kinds of abuse it claims to oppose. Instead, our approach to technology and its deployment to ensure communities’ safety should be grounded in the respect of human rights, accountability, and historical awareness. That means, among other things, recognising how surveillance infrastructure echoes colonial logic and racialised control, and resisting the temptation to offload political responsibility to machines.
Civil society has long warned against techno-solutionism. EDRi and its members, like La Quadrature du Net and Access Now, have consistently argued that alternative ways of managing our social crises are possible and should be rooted in racial justice, dignity and democratic values, and would require reimagining our relationship with and conception of technology altogether.
