Potential loopholes in the AI Act could allow use of intrusive tech on ‘national security’ grounds
Both the European Union (EU) and the Council of Europe (COE) negotiations are considering excluding AI systems designed, developed and used for military purposes, matters of national defence and national security from the scope of their final regulatory frameworks.
If this happens, it will create a huge regulatory gap for such systems. Civil society and human rights defenders are rightfully concerned: even AI systems that present “unacceptable” levels of risk, and are therefore prohibited by the EU AI Act, could easily be “resuscitated” or “recycled” for the exclusive purpose of national security.
This would affect our freedoms of movement, assembly, expression, participation and privacy, among others.
In a new blog, EDRi affiliate ECNL’s Francesca Fanucci and Catherine Connolly from Stop Killer Robots’ Automated Decision Research argue that the EU and the COE must ensure that:
- There are no blanket exemptions for AI systems designed, developed and used for military purposes, matters of national defence and national security;
- Such systems undergo risk and impact assessments before they are deployed and throughout their use.
Technology nominally described as “developed or used exclusively for national security purposes” may fundamentally affect our civic freedoms. Read the full blog to learn how and why.
This article was first published here by ECNL.
Contribution by: EDRi affiliate ECNL & Stop Killer Robots