Who should decide what we see online?

Online platforms rank and moderate content without letting us know how and why they do it. There is a pressing need for transparency of the practices and policies of these online platforms.

By Access Now and Civil Liberties Union for Europe (guest author) · March 11, 2020

Our lives are closely intertwined with technology. One obvious example is how we browse, read, and communicate online. In this article, we discuss two methods companies use to deliver content to you: ranking and moderation.

Ranking content

Platforms use automated measures for ranking and moderating content we upload. When you search for those cat videos during lulls at work, your search result won’t offer every cat video online. The result depends on your location, your language settings, your recent searches, and all the data the search engine possesses about you.

Services curate and rank content while predicting our personal preferences and online behaviors. This way, they influence not only our access to information, but also how we form our opinions and participate in public discourse. By predicting our preferences, they also shape them and slowly change our online behavior.
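The kind of personalisation described above can be illustrated with a toy scoring function. This is a deliberately simplified sketch, not any platform's actual algorithm: the profile fields, weights, and signals are all hypothetical, chosen only to show how two users searching for the same thing can be shown different results.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    location: str
    language: str
    recent_searches: list = field(default_factory=list)

@dataclass
class Video:
    title: str
    language: str
    region: str
    engagement: float  # hypothetical normalised engagement signal

def score(video: Video, user: UserProfile) -> float:
    """Toy relevance score: baseline engagement, boosted by profile matches."""
    s = video.engagement
    if video.language == user.language:
        s += 0.5  # prefer the user's language
    if video.region == user.location:
        s += 0.3  # prefer locally relevant content
    if any(q in video.title.lower() for q in user.recent_searches):
        s += 0.4  # echo recent search history
    return s

def rank(videos, user):
    # Higher score first: the same catalogue yields a different
    # ordering for each user, which is the essence of personalised ranking.
    return sorted(videos, key=lambda v: score(v, user), reverse=True)
```

The point of the sketch is that the ordering is invisible to the user: nothing in the output reveals which signals produced it, which is exactly the transparency gap the article describes.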

These services play a crucial role in determining what we read and watch. It’s like being in a foreign country on a tour where only the guide speaks the language: the guide gets to choose what you see and who you talk to. Similarly, online services decide what you see. By amplifying sensational content that boosts engagement, with the often unpredictable side effects of algorithmic personalisation, content ranking has become a commodity from which platforms profit. Moreover, this may interfere with your freedom to form an opinion. That freedom is an absolute right: no interference with it is permitted by law, nor can any democratic society accept one.

The automated curation of content determines what information we receive and strongly influences how much time we spend on a platform. Most of us have little information about how recommendation algorithms hierarchise content on the internet, and many don’t even know that ranking exists. Meaningful transparency in curation mechanisms is a precondition for user agency over the tools that shape our informational landscape. We need to know when we are subjected to automated decision-making, and we have the right not only to an explanation but also to object to it. To regain agency over content curation, we need online platforms to implement meaningful transparency requirements. Robust transparency and explainability of automated measures are preconditions for exercising our right to freedom of expression, so that we can effectively appeal undue content restrictions.

Content moderation

Online platforms curate and moderate to help deliver information. They also do so because EU and national lawmakers impose ever more responsibility on them to police content uploaded by users, often under threat of heavy fines. Under the European legal framework, platforms are obliged to swiftly remove illegal content, such as child abuse material or terrorist content, once they are aware of its existence. We all agree that access to illegal content should be blocked. However, in some cases the illegality of a piece of content is very difficult to assess and requires a proper legal evaluation. For instance, a video may infringe copyright, or it may be lawfully reused as a parody.

Drawing the line between legal and illegal content can be challenging. The tricky part is that, given the scale at which content must be managed, online platforms rely on automated decision-making tools as the default solution to this very complex task. To avoid liability, platforms use automation to filter out anything that might be illegal. But we can’t rely exclusively on these tools – we need safeguards and human intervention to control automation.
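One common safeguard pattern is to act automatically only on near-certain cases and route the ambiguous middle band to a human reviewer rather than auto-removing it. The sketch below assumes a hypothetical classifier score between 0 and 1; the thresholds and function names are illustrative, not drawn from any real system.

```python
def moderate(illegal_probability: float,
             block_threshold: float = 0.95,
             review_threshold: float = 0.6) -> str:
    """Route content by a hypothetical classifier's 'illegal' probability.

    Only near-certain cases are actioned automatically; everything
    context-dependent (parody? quotation? news reporting?) goes to
    a human reviewer instead of being removed by the machine.
    """
    if illegal_probability >= block_threshold:
        return "remove"        # manifestly illegal: automation can act
    if illegal_probability >= review_threshold:
        return "human_review"  # ambiguous: needs human legal judgement
    return "keep"
```

The design choice here reflects the article's argument: automation handles the clearly black-and-white cases, while a person remains accountable for everything in between.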

What safeguards do we need?

Without a doubt, content moderation is an extremely difficult task. Every day, online platforms have to make tough choices about which pieces of content stay online and how we find them. Automated decision-making is unlikely ever to solve the social problems of hate speech, disinformation, or terrorism. While automation can work well for content that is manifestly illegal irrespective of its context, such as child abuse material, it continues to fail in any area that is not strictly black and white. No tool should have the final say over the protection of free speech or your private life.

As things stand, online platforms rank and moderate content without letting us know how and why they do it. There is a pressing need for transparency about the practices and policies of these online platforms. They have to disclose how they respect our freedom of expression and what due-diligence mechanisms they have implemented. They have to be transparent about their everyday operations, their decision-making processes and implementation, as well as their impact assessments and other policies that affect our fundamental human rights.

Besides transparency, we also need properly designed complaint mechanisms and human intervention wherever automated decision-making is used. Without accessible and transparent appeal mechanisms, and without people accountable for policies, there can be no effective remedy. If there is a chance that content has been removed incorrectly, a real person needs to check it and decide whether the content was legal. We should also always have the right to bring the matter before a judge, who is legally qualified to make the final decision on any matter that may compromise our right to free speech.

Access Now

Who should decide what we see online? (20.02.2020)

Can we rely on machines making decisions for us on illegal content? (26.02.2020)

A human-centric internet for Europe (19.02.2020)

(Contribution by Eliška Pírková, EDRi member Access Now, and Eva Simon, Civil Liberties Union for Europe)