The EDPB’s Rorschach Test: What the Data Protection Body’s Opinion on AI Training Means for GDPR Enforcement
In December 2024, the European Data Protection Board (EDPB) released a much-awaited Opinion on AI model training. While the Opinion reaffirmed GDPR principles and underscored the need for robust safeguards, its ambiguities may leave room for regulatory evasion, reinforcing the ongoing struggle between data protection rules and the commercial interests driving AI development.
Why has the EDPB’s long-awaited Opinion on AI training sparked intense debate?
On 4 September 2024, the Irish data protection watchdog invoked Article 64(2) of the GDPR to request an opinion from the European Data Protection Board (EDPB)—an umbrella body that ensures consistent application of the GDPR by coordinating the work of all EU/EEA data protection authorities—regarding the processing of personal data in the context of AI training. This followed a number of complaints lodged by EDRi member noyb, which successfully challenged the practices that Meta, X and others had started implementing in the EU to feed their AI training datasets.
On 17 December 2024, the EDPB released its long-awaited ‘Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models.’ The process itself revealed several shortcomings: an expedited timeline, limited and asymmetrical stakeholder involvement, and a problematic framing. As for its content, the document has sparked intense debate and will continue to do so. The main reason is that, the more one reflects on it, the more it resembles a Rorschach test: everyone seems to see what they want to see.
At first glance, the Opinion might appear to offer a strong reaffirmation of GDPR principles, countering significant industry pressure to relax safeguards. It seems to position data protection rights at the centre of the ongoing AI conversation, advocating for robust regulation. Yet many digital rights advocates were concerned because the Opinion focused on ‘Legitimate Interest’ (LI) as a legal basis for processing, which weakens the requirement for companies to obtain people’s explicit consent. This opens a door for companies to use personal data without permission, making it easier to train AI systems on people’s information while bypassing privacy safeguards. The result is a heightened risk of data misuse, surveillance and bias in AI (not to mention the environmental costs!).
In the long run, this wouldn’t just weaken the foundations of the data protection framework but could also have far-reaching consequences for other fundamental rights. The personal data in question fuels the algorithms that Big Tech social media platforms use to exploit our behaviours, shape our perceptions and manipulate our vulnerabilities.
