The AI Act isn’t enough: closing the dangerous loopholes that enable rights violations

While the EU's AI Act aims to regulate high-risk AI systems, it is undermined by major loopholes that allow the unchecked use of such systems for national security and law enforcement purposes. These exemptions risk enabling, among other harms, mass surveillance of protests and discriminatory migration practices. To prevent this, EDRi affiliate Danes je nov dan has published recommendations for Slovenia to adopt stricter national safeguards and transparent oversight mechanisms.

By Danes je nov dan (guest author) · November 13, 2025

The AI Act’s broad exemptions for national security

The primary goal of the AI Act is to protect human rights from the risks posed by artificial intelligence (AI). However, this objective is immediately undermined by a critical limitation: the regulation does not apply at all to AI systems used for military, defence, or national security purposes. This approach stems from the Treaty on European Union (TEU), which gives Member States exclusive competence over their national security. The result, however, is a dangerous grey area in which fundamental rights can be significantly eroded.

Such a wide-ranging exclusion creates a substantial risk of abuse, particularly because the line between national security and ordinary law enforcement activities is often blurry. For example, authorities could subject protests or other forms of public assembly to AI surveillance by citing “national security,” thereby circumventing the AI Act’s stricter rules for law enforcement. The mere possibility of such surveillance could have a chilling effect on the rights to protest, privacy, and free assembly, weakening democratic processes.

However, the TEU does not mandate a complete exclusion from the scope of EU law. When deploying AI systems, Member States must still respect general principles of EU law, such as the principle of proportionality: any measure must be necessary and appropriate to achieve a specific objective, while respecting the essence of fundamental rights.

Weak safeguards for high-risk systems

Even for systems that are technically covered by the Act, the rules designed to protect the public are dangerously weak. The list of “prohibited practices” in high-risk areas like law enforcement and migration is in fact a list of conditional prohibitions riddled with vague, exploitable exceptions. This approach fails to provide the robust protection that civil society organisations have repeatedly called for.

The Act is also dangerously incomplete on migration and border control. It fails to classify many harmful AI systems – such as predictive analytics tools used for profiling and restricting migration – as high-risk. Combined with weakened transparency obligations for law enforcement and migration uses, this prevents effective public oversight and makes it nearly impossible for affected people, civil society, and journalists to know where and how some of the most harmful systems are being used.

Another critical concern is the arbitrary distinction the Act draws between “real-time” and “post-remote” biometric identification. While real-time facial recognition is heavily restricted, post-remote identification (analysing footage after an event) is subject to much looser conditions and a weaker authorisation process. This could, for example, allow law enforcement to identify participants in a political protest from recordings, even without suspicion of a crime, enabling mass surveillance and violating fundamental rights just as severely as real-time systems would.

Key recommendations for a rights-based approach

To close these dangerous gaps and ensure technology serves people rather than endangering them, it is crucial that EU Member States adopt stronger national safeguards. Therefore, EDRi affiliate Danes je nov dan has called on the Slovenian government to lead the way by implementing the following measures:

  1. Adopt a clear protocol for AI use in national security
    The government must adopt a clear and strict framework that narrowly defines the use of AI for national security purposes. This will ensure a clear demarcation between activities that fall within the prevention, detection, or investigation of criminal offences and those that fall within national security, avoiding dangerous overlaps.
  2. Implement stricter rules for high-risk AI systems
    Slovenia should adopt more restrictive measures for AI systems used in law enforcement and migration, especially concerning remote biometric identification. This also includes refraining from passing any new legislation that would allow AI systems used to detect, prevent, investigate, or prosecute criminal offences to circumvent the transparency obligations of Article 50 of the Act.
  3. Ensure robust and transparent authorisation procedures
    The process for authorising the use of post-remote biometric identification must be clearly and strictly defined in national law, including the procedure for obtaining permission and the steps to be taken in case of refusal. This will bind authorities to clear restrictions and ensure an adequate level of human rights protection.
  4. Establish comprehensive public oversight and control mechanisms
    All AI systems used by the public sector, including for law enforcement and national security, should be listed in a public AI registry to enable public scrutiny. The state must also establish an effective internal system for monitoring these systems and for suspending them immediately if any violation of fundamental rights is found.

Building stronger national safeguards

The AI Act provides a foundation for regulating this field, but its effectiveness depends on Member States acting to address its shortcomings. The recommendations for Slovenia offer a clear path for how strategic national legislation can close regulatory gaps and strengthen democratic accountability. By adopting these measures, governments across the EU can ensure that the deployment of new technologies does not come at the cost of our fundamental rights.

Read the full recommendations and use this framework to demand greater AI transparency and accountability in your country.

Contribution by: EDRi affiliate, Danes je nov dan