EU legislators must close dangerous loophole and protect human rights in the AI Act
Over 115 civil society organisations are calling on EU legislators to remove a major loophole in the high-risk classification process of the Artificial Intelligence (AI) Act and maintain a high level of protection for people’s rights in the legislation.
As the European Union enters the final stage of negotiations on the AI Act, civil society is concerned about a major loophole in the legislation’s high-risk classification process that Big Tech and other industry players have lobbied to introduce.
More than 115 civil society organisations are urging MEPs to stand against the tech and industry lobby and to reverse these changes, restoring the Commission’s original language in Article 6 of the AI Act. This is the only way to ensure that the rights of people affected by AI systems are prioritised and that the development and use of AI are both accountable and transparent.
The changes would allow developers of AI systems to decide for themselves whether their systems are ‘high-risk’. Letting companies set the risk classification of their own AI systems undermines the human rights protections the legislation affords: companies have a profit incentive to understate the risks their systems pose.
These changes to Article 6 must be rejected and the European Commission’s original risk-classification process restored. The AI Act must set out an objective, coherent and legally certain process for determining which AI systems are ‘high-risk’.