Big Tech platforms are hurting us. 50 organisations urge the EU to #FixAlgorithms

The list of negative consequences of how dominant online platforms shape our experience online is neither short nor trivial. Exploiting users’ vulnerabilities, triggering psychological trauma, depriving people of job opportunities, pushing disturbing content: these are just some examples. While Members of the European Parliament debate their position on the Digital Services Act (DSA), EDRi member Panoptykon Foundation (Poland), together with 49 civil society organisations from all over Europe, including EDRi, Amnesty International, Article 19, European Partnership for Democracy and Electronic Frontier Foundation, urge them to ensure protection from the harms caused by platforms’ algorithms.

By Panoptykon Foundation (guest author) · September 22, 2021

The Ugly Face of Data-Hungry Algorithms

Ad delivery algorithms and recommender systems are responsible for what we see when we visit Facebook or YouTube. Their code may be complicated – it is artificial intelligence, after all – but their job isn’t. The goal is to maximise the platforms’ profits from surveillance-based advertising. And that translates to keeping users on the platform, so that they watch more ads while leaving ever more traces for data-hungry algorithms to collect.
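
To make that logic concrete, here is a deliberately simplified, hypothetical sketch of an engagement-maximising ranker. The names, weights and structure are invented for illustration – no platform’s actual code is public – but the essential point holds: the objective rewards only time on platform and ad exposure, and nothing in it accounts for user wellbeing or content quality.

    # Hypothetical toy example of an engagement-maximising ranker.
    # All names and weights are invented for illustration; real platform
    # systems are vastly more complex and are not public.
    from dataclasses import dataclass

    @dataclass
    class Item:
        title: str
        predicted_watch_seconds: float   # model's engagement estimate
        predicted_ad_impressions: float  # expected ads shown while viewing

    def engagement_score(item: Item) -> float:
        # The objective rewards only time-on-platform and ad exposure.
        # Nothing here penalises divisive or harmful content.
        return item.predicted_watch_seconds + 10.0 * item.predicted_ad_impressions

    def rank_feed(items: list[Item]) -> list[Item]:
        # Show the most "engaging" items first.
        return sorted(items, key=engagement_score, reverse=True)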

But you can’t make an omelette without breaking eggs. In this case, the eggs are users’ self-image (affected by the algorithm’s choice of photos on Instagram), the quality of public debate (recommender systems notoriously promote divisive, sensationalist content), and access to job offers (data-driven ad delivery algorithms, which select viewers from larger sets of eligible targets, have been shown to discriminate against people based on gender, race, or age).

Platforms know more about their users than they tell them. Every bit of a user’s online activity, on and off the platform, is used to make predictions about them and to determine what content they will, or will not, see online. Advertisers may not intend to discriminate against anyone, but the algorithmic fixation on campaign targets can have that effect.

The pile of evidence on the harmful consequences of algorithms used by large online platforms is growing – although investigating them is difficult due to pervasive opacity.

Civil Society Calls for Improvements in the DSA

The debate on the draft DSA proposal presented by the European Commission largely focuses on issues related to the moderation of user content. Although important, these issues are less inconvenient for platforms because they do not challenge their surveillance-based business model or affect their attention-maximising algorithms. But human rights defenders are not going to let it go: 49 civil society organisations – including European Digital Rights, Amnesty International, Article 19, European Partnership for Democracy and Electronic Frontier Foundation – have joined Panoptykon in calling on the members of the European Parliament’s Internal Market and Consumer Protection Committee to empower users and ensure effective oversight of algorithms in their amendments to the DSA.

Protection by default is an essential part of the solution. Users should not be forced to abdicate control of their data as a condition of access to a service. By default, they should be able to use the platform without having to share their personal data for advertising or recommendation purposes. The DSA should also prohibit the use of deceptive interfaces and consent screens designed to impair users’ free choice.

But for users’ choices to be genuinely informed and free, the algorithms need to be more transparent. You can’t make an informed choice unless you know how the algorithms used by a platform work. Disclosure of all the key information about these algorithms is therefore our baseline, as it will give users better insight into how the content they see is selected.

In addition, access to data for academic researchers, journalists and civil society organisations is crucial to scrutinise how algorithms work and to audit their effects. In the past, it was journalists, independent researchers and civil society organisations who shed light on the harmful consequences of platform algorithms. But their access to data depended on the company’s good will, which has its limits, especially when the findings could generate unwelcome publicity (as with those published by AlgorithmWatch).

Last but not least, the organisations demand that users be able to file reports when they find content recommended by the platform objectionable. Users should be able to modify the recommendation system so that it works for them. But they should also be allowed to break away from the platform’s centralised system and choose an independent recommendation service – commercial or not – that better aligns with their interests. Signatories of the letter argue that it is this ecosystem innovation that has the potential to truly empower users, as well as to exert pressure on big platforms to make real improvements in their own systems.
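
As an illustration of what such unbundling could look like technically, the hypothetical sketch below shows a platform delegating ranking to an interchangeable, user-chosen recommendation service. The interface and class names are assumptions made for the sake of the example; the DSA does not prescribe any particular API.

    # Hypothetical sketch of a pluggable recommendation interface.
    # Interface and class names are invented; no such standard exists yet.
    from typing import Protocol

    class Recommender(Protocol):
        def rank(self, candidate_ids: list[str]) -> list[str]:
            """Return candidate content IDs in display order."""
            ...

    class ChronologicalRecommender:
        # A possible third-party alternative: no engagement optimisation,
        # just newest-first (assuming IDs sort by creation time).
        def rank(self, candidate_ids: list[str]) -> list[str]:
            return sorted(candidate_ids, reverse=True)

    def build_feed(recommender: Recommender, candidate_ids: list[str]) -> list[str]:
        # The platform supplies the candidates; the user-chosen
        # service decides the order in which they appear.
        return recommender.rank(candidate_ids)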

The great data protection reform, the GDPR, has failed to curb the negative effects of Big Tech’s data-driven optimisation algorithms. Instead of giving people real influence over how companies gather and use their data, ‘consent’ is hidden in terms of use and deceptive interfaces that nudge users towards choices they would not otherwise make. The Digital Services Act is a unique opportunity to fix this. Will MEPs seize it? Civil society organisations are doing what they can to persuade them to use the solutions at their disposal to enable a digital world that is both innovative and beneficial to society.

Image credit: Bitteschoen.tv/Hertie School

(Contribution by: Anna Obem, Managing Director & Karolina Iwańska, Lawyer and Policy Analyst, EDRi member Panoptykon Foundation)