Can the EU Digital Services Act contest the power of Big Tech’s algorithms?
A progressive report on the Digital Services Act (DSA), adopted in July by the European Parliament's Committee on Civil Liberties, Justice and Home Affairs (LIBE), is the first major improvement of the draft law presented by the European Commission in December 2020. MEPs expressed support for default protection from tracking and profiling for the purposes of advertising and of recommending or ranking content. Now the ball is in the court of the leading Committee on the Internal Market and Consumer Protection (IMCO), which received 1313 pages of amendments to be voted on in November. EDRi member Panoptykon Foundation explores whether the Parliament will succeed in adopting a position that contests the power of dominant online platforms, which shape the digital public sphere in line with their commercial interests, at the expense of individuals and societies.
Why algorithms should be at the centre of the DSA debate
There is a growing body of evidence on the harmful consequences of the algorithms that large online platforms use to select recipients of targeted ads and to organise, rank and curate the vastness of content uploaded by their users. Ad delivery algorithms have been found to discriminate against marginalised groups simply by the way they were designed, even when the advertiser did not intend it; recommender systems notoriously promote divisive, sensationalist content, contributing to the erosion of public debate; and a recent study from Mozilla documented people's experiences of the "rabbit hole" effect: recommendations of increasingly extreme content.
Google and Facebook often frame these issues as unintended consequences of otherwise fair and useful personalisation systems and promise to "do better" in the future. But such cosmetic interventions cannot address the core of the problem: the harmful logic of these systems, which follows from the platforms' commercial interests. Since companies like Google, Facebook, Twitter or TikTok make their profits mainly from targeted advertising, their overarching business goal is relatively simple: display as many ads as people can handle without discouraging them from using the platform. They must grab and hold users' attention in order to maximise the time spent on the platform, because more time means more data left behind and more ad impressions. These goals are deeply embedded in the design of the algorithmic systems in use and in themselves lead to individual and societal harms.
European institutions should create conditions to increase users’ ability to arrive at well-informed decisions and make them less vulnerable to a range of manipulative and harmful inferences. A means to this end is to enhance transparency and accountability of algorithms used by dominant platforms and to increase users’ control over the information they share and access online.
Meanwhile, mainstream debates on the DSA, both in Brussels and in Poland, largely overlook this aspect, focusing instead on how to deal with illegal or harmful content posted by users. The debate over how the platforms' own infrastructure contributes to the problems we see, how the maximisation of attention and the extraction of data drive the way users are profiled and content is curated, ranked and recommended, and how to fix power imbalances in the digital public sphere, is limited to a small number of progressive MEPs and digital rights organisations.
Starting point: a safe proposal from the European Commission
The draft DSA presented by the Commission in December 2020 was a missed opportunity to tackle the problems of the algorithmic influence machine, focusing instead on providing a legal basis for already existing practices and tools (such as ad libraries or "why am I seeing this" boxes). The Commission did not introduce any limits on invasive profiling of users or on the data that may be used to target ads and recommend content. Advertising transparency focused on the role of the advertiser, instead of extending to the equally crucial role of the platform itself, which optimises the delivery of ads (or, in less technical terms, selects the ad's final recipients: those who not only meet the advertiser's targeting criteria but are also more likely than others to react to the ad in the way the advertiser desires). While the Commission's version of the DSA would allow users to opt out of profiling for recommender systems, it did not prohibit platforms from using dark patterns to nudge them away from doing so. And although it created a promising risk assessment regime, it did not give users the agency to challenge the very logic of the algorithms used in recommender systems.
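To make the targeting/delivery distinction concrete, here is a minimal, purely illustrative sketch (invented names and data, not any real platform's API): the advertiser defines who may see an ad, but the platform's own prediction model decides who actually sees it.

```python
# Hypothetical sketch of ad targeting vs. ad delivery optimisation.
from dataclasses import dataclass


@dataclass
class User:
    user_id: str
    interests: set[str]
    predicted_reaction: float  # platform's own model score, opaque to the advertiser


def eligible(users, targeting_interests):
    """Step controlled by the advertiser: who MAY see the ad."""
    return [u for u in users if u.interests & targeting_interests]


def optimise_delivery(eligible_users, budget):
    """Step controlled by the platform: who ACTUALLY sees the ad."""
    ranked = sorted(eligible_users, key=lambda u: u.predicted_reaction, reverse=True)
    return ranked[:budget]


users = [
    User("u1", {"sports"}, 0.9),
    User("u2", {"sports", "news"}, 0.4),
    User("u3", {"music"}, 0.8),
]
pool = eligible(users, {"sports"})           # advertiser's criteria match u1 and u2
shown = optimise_delivery(pool, budget=1)    # platform's model decides only u1 sees it
print([u.user_id for u in shown])
```

The second step is where the Commission's draft stays silent, even though it is the platform, not the advertiser, that controls it.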
Towards better protection of fundamental rights – Panoptykon’s demands
At Panoptykon we advocate for better regulation of content-governance algorithms, which can be summarised in three demands: protect users by default, increase platform accountability, and give users meaningful agency.
1. Default protection from profiling
Today, the vast troves of data that users leave behind as they navigate online platforms' various services, and the websites that embed their trackers, are analysed to produce predictions about characteristics that users have never explicitly revealed (so-called inferred data). These characteristics may be extremely intimate and sensitive: they may relate to users' physical and mental health, addictions, triggers of anxiety or fear, financial situation, important life events (pregnancy, divorce), race or political leanings.
Prohibiting the use of sensitive data or other protected characteristics seems like a straightforward solution. But it will be ineffective, because the way machine learning algorithms work allows them to find so-called proxy data: data that is closely correlated with sensitive attributes. For example, the number of hours spent daily on a platform may be a proxy for age or employment status, and the set of liked pages may be a proxy for gender, sexual orientation or political opinion. An algorithm may recognise individuals as similar, and treat them as such, without ever labelling them as "male" and "female" or "liberal" and "conservative". Even if we identify known proxies and ban them, the algorithm will soon find new correlations: The Markup showed that even when controversial categories are removed, Facebook's algorithms quickly find new proxies that produce the same outcome, e.g. targeting people based on race without ever flagging a particular piece of information (such as a ZIP code) as correlating with that characteristic. This is what the algorithms are designed to do, and it is precisely where Facebook's and Google's dominance of the advertising market stems from.
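The proxy mechanism can be shown in a few lines. The sketch below is an assumption-laden toy example with synthetic data (not drawn from any real platform): a model that is never given the protected attribute still recovers it from two correlated behavioural signals.

```python
# Illustrative only: a protected attribute leaks through proxy features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (e.g. membership of a marginalised group).
protected = rng.integers(0, 2, size=n)

# Features the platform does observe, correlated with the attribute by construction:
hours_online = rng.normal(loc=2 + 1.5 * protected, scale=1.0, size=n)
liked_pages = rng.poisson(lam=5 + 10 * protected, size=n)
X = np.column_stack([hours_online, liked_pages])

# The protected attribute itself is never an input, yet it can be predicted from proxies.
X_train, X_test, y_train, y_test = train_test_split(X, protected, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Protected attribute recovered from proxies alone: {model.score(X_test, y_test):.0%} accuracy")
```

Banning the sensitive column changes nothing here; the correlation survives in the behavioural data, which is why bans on individual categories are not enough.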
However, these problems can be avoided if targeting is based only on data that users have explicitly provided and can easily control. That's why Panoptykon advocates a total ban on using inferences to target ads or curate content. Instead, users could – if they wish – directly provide the information about themselves that they want to be used for personalisation (e.g. a set of liked pages or specific subjects they are interested in).
At the same time, users should be able to do nothing and still be protected. That's why the default option should be set to "no personal data", and companies should not be able to nudge users – with the use of dark patterns – into providing any information about themselves.
2. Algorithmic accountability
The algorithms that platforms use to deliver advertising or to recommend and rank content should be subject to assessments of their potential to harm fundamental rights, including privacy and freedom of expression. Because companies will naturally tend to prioritise their business interests, self-assessments must be submitted for independent audit and approval by the regulator, who should have the power to impose specific mitigation measures, including changes to specific parameters or signals on which an algorithm relies. Assessments must also be available for scrutiny by public-interest researchers, civil society organisations and journalists.
For the supervisory body to accurately assess harms, and for vetted researchers, CSOs and journalists to be able to identify abuses, dominant platforms should give them access to the data and models used by algorithmic systems, including access to all the signals they rely on, the metrics the systems optimise for, and the possibility to run tests. Today, access to data relies on the goodwill of dominant platforms, which allow research only as long as it does not endanger their business interests: otherwise they swiftly revoke access and, in the case of Facebook, even sue researchers who dig a little too deep for the platform's liking.
3. Increased user agency and rebuilding the online ecosystem
Real user empowerment cannot happen without creating the conditions for meaningful choice and for the development of alternatives based on a different business logic. That's why the EU should require the biggest platforms, which have the most power over our digital public sphere, to ensure that users may choose alternative recommender systems provided by third parties, be they companies or non-profits. We could imagine a recommender system that, instead of amplifying sensationalist content designed to capture attention – the basic commodity of surveillance capitalism – is built to prioritise more valuable content. This solution, technically based on interoperability, paves the way for the development of European alternatives to Big Tech. In the not-so-distant future, we could imagine a brand-new market in which new solutions, developed for instance by a Polish start-up or by a media organisation, would try to gain users' trust.
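What "choosing an alternative recommender" could mean technically is sketched below, under the assumption (ours, not the amendments') that a platform exposes a stable ranking interface that third-party implementations can plug into. All names and signals are invented for illustration.

```python
# Hypothetical interoperable recommender interface: the user, not the platform,
# picks which implementation ranks their feed.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Item:
    item_id: str
    topics: list[str]
    predicted_engagement: float  # platform-estimated probability of a click/reaction


class Recommender(Protocol):
    def rank(self, items: list[Item], declared_interests: list[str]) -> list[Item]: ...


class EngagementRecommender:
    """Status quo: optimise for attention, whatever the content."""
    def rank(self, items, declared_interests):
        return sorted(items, key=lambda i: i.predicted_engagement, reverse=True)


class DeclaredInterestRecommender:
    """Third-party alternative: rank by overlap with interests the user chose to share."""
    def rank(self, items, declared_interests):
        overlap = lambda i: len(set(i.topics) & set(declared_interests))
        return sorted(items, key=overlap, reverse=True)


feed = [
    Item("a", ["outrage", "politics"], 0.9),
    Item("b", ["local-news"], 0.2),
    Item("c", ["gardening"], 0.1),
]
chosen: Recommender = DeclaredInterestRecommender()  # the user's choice, not the platform's
print([i.item_id for i in chosen.rank(feed, declared_interests=["gardening", "local-news"])])
```

The point of the design is that both recommenders work on the same feed, so switching the ranking logic does not require leaving the platform.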
Building on these three pillars, Panoptykon has drafted amendments to the original version of the DSA, hoping to convince MEPs from the leading IMCO committee to file proposals along these lines. Our advocacy efforts were largely successful.
Progressive amendments in the European Parliament
In July the IMCO committee published 1313 pages containing over 2200 amendments submitted by committee members. Some proposals would further weaken the Commission's draft law, but most importantly there are also bold ideas aimed at rebuilding the online space as we know it (and implementing many of Panoptykon's demands).
A few examples of progressive amendments that we support:
- A prohibition of behavioural advertising. We are rooting for it, but acknowledge that it may be difficult to secure a political majority for this idea. A compromise option could include Panoptykon’s idea of allowing personalisation in ads only on the basis of data directly provided by users (see point 1 of our demands), thus restricting the most harmful version of data-driven advertising while not banning non-intrusive personalisation.
- A prohibition of dark patterns. An extremely important proposal that would effectively limit the manipulative practices of online platforms which distort real choice and debase the meaning of consent.
- Transparency of the optimisation process. One of the amendments draws a crucial distinction between ad targeting (controlled by the advertiser) and ad delivery (controlled by the platform) and mandates transparency for both.
- Enhanced transparency of algorithms. A number of amendments aim to make algorithmic transparency a reality by specifying that platforms must reveal the algorithm's optimisation goal and the criteria the system relies on, together with an indication of how much each criterion weighs on the outcome.
- Profiling in recommender systems turned off by default, and the possibility to use third-party recommenders. As explained above, this proposal has the potential to truly rebuild the ecosystem, moving away from platform-controlled systems optimised for engagement and towards increased user agency.
- A robust risk assessment regime that fixes the weaknesses of self-evaluation by online platforms and empowers regulators and civil society to hold them effectively to account.
In the best scenario, the digital public sphere of the future could be a safe, non-intrusive space where people's fundamental rights are protected, where those wishing to shape their own experience would have the tools to do so, and where those who would rather rely on the platform would be protected by default from the harms that unaccountable algorithms may cause.
In the worst scenario, we will experience a further exacerbation of existing problems: mass violations of privacy, discrimination, exploitation of vulnerabilities, erosion of public debate, the use of platforms’ advertising machinery to facilitate political manipulation, not to mention the impact of attention-maximising infrastructures on individual well-being, mental health and broader social mechanisms.
Most likely, the final outcome in the European Parliament will be a compromise between the two. What shape that compromise takes will be decided in autumn, with the vote in the IMCO committee scheduled for 8 November, followed by a plenary vote in December. Panoptykon, together with the EDRi network, will continue to engage actively in this legislative process.
The article was published on 2 August 2021.
(Contribution by: Karolina Iwańska, Lawyer and Policy Analyst, EDRi member Panoptykon Foundation)