The EU’s Artificial Intelligence Act: Civil society amendments

Artificial Intelligence (AI) systems are increasingly used in all areas of public life. It is vital that the AI Act addresses the structural, societal, political and economic impacts of the use of AI, is future-proof, and prioritises affected people, the protection of fundamental rights and democratic values. The following issue papers detail the amendments of civil society following the Civil Society Statement on the AI Act, released in November 2021.

By EDRi · May 3, 2022

The European Union institutions have taken a globally-significant step with the proposal for an Artificial Intelligence Act.


The following issue papers were drafted in collaboration with a number of civil society organisations, including Access Now, Algorithm Watch, Bits of Freedom, European Digital Rights (EDRi), European Disability Forum (EDF), European Center for Not-for-Profit Law (ECNL), Fair Trials, Panoptykon Foundation, and PICUM.

Updating the risk-based system: Ensure a future-proof AI Act, allowing all risk categories (unacceptable, high risk, limited risk) to be updated to adapt to a changing technology market. Drafting led by Access Now.

Prohibit Remote Biometric Identification (RBI) in publicly accessible spaces: Expand the limited prohibition by applying it to all uses (real-time and post) of RBI in publicly accessible spaces, by all actors, without exceptions.

Prohibit predictive policing: A full prohibition on predictive policing systems, to prevent discriminatory practices and protect the presumption of innocence. Drafted with Fair Trials.

AI in migration and border contexts: Outlining the need to update the AI Act to include prohibitions in the migration context, update the high-risk list, and amend Article 83 to ensure all high-risk systems in migration are regulated, including those that form part of EU IT systems. Drafted with Access Now, PICUM, and Statewatch.

Emotion recognition: Include a comprehensive prohibition on emotion recognition, with a limited exemption for legitimate uses for assistive technologies for people with disabilities. Drafting led by Access Now, EDRi and EDF.

Biometric categorisation: Prohibit remote biometric categorisation in publicly accessible spaces, and any discriminatory biometric categorisation. Drafting led by Access Now and EDRi.

Obligations on users and fundamental rights impact assessments: Introduce obligations on users of high-risk AI, designed to ensure greater transparency as to how high-risk AI is used, and to ensure accountability and redress for uses of AI that pose a potential risk to fundamental rights.

Ensure consistent and meaningful public transparency: Ensuring transparency to the public as to which AI systems are used, when and for what purpose. Drafting led by Algorithm Watch. 

Ensure meaningful transparency of AI systems for affected people: Ensure people affected by AI systems are notified and have the right to seek information when affected by AI-assisted decisions and outcomes. Ensure Article 52 reflects the full range of AI systems requiring individual transparency. Drafting led by Panoptykon Foundation. 

Rights and redress for people impacted by AI systems: Ensure people affected by AI systems are adequately protected, and have rights and access to redress when their rights have been impacted by AI systems.

Ensure horizontal and mainstreamed accessibility requirements for all Artificial Intelligence (AI) systems and use: Recommendations on the inclusion of accessibility requirements in the development and deployment of AI systems. Drafting led by the European Disability Forum.

Sustainability transparency measures for AI systems: Ensure minimum transparency on the ecological sustainability parameters for all AI systems in the AI Act. Drafting led by Algorithm Watch. 

Strictly regulate high-risk uses of biometrics: Ensure that for permissible uses of biometric systems, rigorous safeguards and protections are in place to protect this sensitive data and to address the enhanced risks that arise when it is processed using AI.

Set clear safeguards for AI systems for military and national security purposes: Ensure that the scope of the AI Act is not narrowed by a blanket exemption for national security. Drafting led by ECNL.

Standards and standardisation process: Ringfence the role of standards in the AI Act to ensure that they are only used to apply the legal and political decisions of the co-legislators, and work towards a more inclusive and accessible standardisation process to ensure genuine and meaningful representation of civil society, national bodies, and affected groups.