ENDitorial: Can design save us from content moderation?

Our communication platforms are polluted with racism, incitement to hate, terrorist propaganda and Twitter-bot armies.

By Bits of Freedom (guest author) · May 16, 2018

Some of that is due to how our platforms are designed. Content moderation and counter speech as “solutions” to this problem both fall short. Could smart design help mitigate some of our communication platforms’ more harmful effects? How would our platforms work if they were designed for engagement rather than attention?

The debate on how to deal with harmful content generally focuses on two arguments, or solutions if you will: the first is content moderation; the second is counter speech.

Content moderation

The relationship between governments and the big networking platforms is quite complex. Your opinion on how these platforms should operate, and whether they should be regulated, depends among other things on whether you consider a platform like Facebook or YouTube to be part of the open internet, or a closed, privately owned space. The truth, of course, lies somewhere in the middle: these are private spaces that are used as public spaces, and they are becoming larger and more powerful by the minute.

Faced with this situation, and not wanting to be seen to be doing nothing to counter harmful content, governments are forcing platforms to take action – and thereby skillfully avoiding taking responsibility themselves. This outsourcing of public responsibility to private parties – while remaining very vague about what that responsibility entails – is bad for approximately a gazillion reasons.

1. Platforms will over-censor: It encourages platforms to be (legally and politically) safe rather than sorry when it comes to the content users upload. That means they will expand the scope of their terms of service so that they can delete any content or any account for any reason. No doubt this will occasionally help platforms take down content that shouldn’t be online, but it has already led to a lot of content being removed that had every right to be online.

2. Multinationals will decide what’s right and wrong: Having your freedom of speech regulated by US multinationals means that nothing you say will be allowed to be heard unless it falls within the boundaries of US morality, or suits the business interests of US companies. Another possible outcome: if Facebook chooses to comply with countries’ individual demands – and why wouldn’t it? – only the information that is acceptable to every one of those countries will be allowed.

3. Privatised law enforcement will replace actual law enforcement: Putting companies in charge of taking down content is a form of privatised law enforcement that involves zero enforcement of the actual law. It bypasses the legal system: people who should be charged with an offence and appear before a court never do, while people whose content is wrongfully removed have very few ways to object.

4. We’ll become more vulnerable: This normalises a situation where companies can regulate us in ways governments legally cannot, as companies aren’t bound (or hindered) by international law or national constitutions. Put gently, this solution is not proving to be an unabashed success.

Counter speech

The other proposal, counter speech, boils down to the belief that the solution to hate speech is free speech. We cannot have a functioning democracy without free speech, but this argument completely neglects to acknowledge the underlying societal power imbalances – the systemic sexism and racism that inform our media, our software and our ideas.

As Bruce Schneier neatly put it: “[Technology] magnifies power in both directions. When the powerless found the Internet, suddenly they had power. But […] eventually the powerful behemoths woke up to the potential — and they have more power to magnify.” We have to acknowledge that as long as there are structural imbalances in society, not all voices are equal. And until they are, counter speech is never going to be a solution for hate speech. So this solution, too, will continue to fall short.

Design matters

This brings us to design. Just like the internet isn’t responsible for hate speech, we can’t and shouldn’t look to design to solve it. But design can help mitigate some of our communication platforms’ more harmful effects.

A recent example shows this clearly. In 2015, Coraline Ada Ehmke, a coder, writer and activist, was approached by GitHub to join their Community & Safety team. This team was tasked with “making GitHub more safe for marginalized people and creating features for project owners to better manage their communities”.

Ehmke had experienced harassment on GitHub herself. A couple of years earlier, someone had created a dozen repositories and given them all racist names. This person then added Ehmke as a contributor to those repositories, so that anyone viewing her user page would see it strewn with racial slurs – the names of those repositories.

A few months after Ehmke started, she finished a feature called “repository invitations”: project invites. This means you can no longer add another user to a project without their consent, so the harassment she had suffered can’t happen to anyone again. Instead of filtering out the bullshit, or going through an annoying and probably ineffective process of having bullshit removed, Ehmke basically gave users control over their own space, creating a situation in which the bullshit never gets the chance to materialise.
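To make the mechanism concrete, here is a minimal sketch of how such a consent gate might work. The names used here (Repository, invite, accept_invite and so on) are hypothetical illustrations, not GitHub’s actual implementation; the point is simply that an invitation creates no visible association until the invited user explicitly accepts it.

```python
from dataclasses import dataclass, field

@dataclass
class Repository:
    name: str
    owner: str
    collaborators: set = field(default_factory=set)  # users who accepted
    pending: set = field(default_factory=set)        # invited, not yet accepted

    def invite(self, user: str) -> None:
        # Adding someone to a project only creates a *pending* invitation.
        if user not in self.collaborators:
            self.pending.add(user)

    def accept_invite(self, user: str) -> None:
        # The association becomes real only once the user consents.
        if user in self.pending:
            self.pending.remove(user)
            self.collaborators.add(user)

def repos_on_profile(user: str, repos: list) -> list:
    # A user's public page lists only projects they actually joined,
    # so repository names chosen by a harasser never show up uninvited.
    return [r.name for r in repos if user in r.collaborators]

# The abuse Ehmke suffered no longer works: an uninvited "contribution"
# stays invisible until the target accepts it.
repo = Repository(name="some-slur", owner="harasser")
repo.invite("target")
assert repos_on_profile("target", [repo]) == []
```

The design choice is the interesting part: rather than moderating harmful content after the fact, the consent gate removes the attacker’s ability to put their content on someone else’s page in the first place.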

Taking “censorship”, renaming it “content moderation”, and subsequently putting a few billion-dollar companies in charge isn’t a great idea if we envision a future where we still enjoy a degree of freedom. Holding on to the naive idea of the internet offering equal opportunity to all voices isn’t working either. What we need to do is keep the internet open, safeguard our freedom of speech and protect users. Ehmke has shown that smart design can help.

What you can do

If you’re an activist: don’t let the success of your work depend on these platforms. Don’t allow Facebook, Google and Twitter to become gatekeepers between you and the members of your community, and don’t consent to your freedom of speech becoming a footnote in a 10,000-word terms of service “agreement”. Be as critical of the technology you use to further your cause as you are of the people, lobbyists, institutions, companies and governments you’re fighting. The technology you use matters.

Finally, if you’re a designer: be aware of how your cultural and political bias shapes your work. Build products that foster engagement rather than harvest attention. And if you have ideas about how design can help save the internet, please get in touch.

This is a shortened version of an article originally published at https://www.bof.nl/2017/12/06/can-design-save-us-from-content-moderation/.

(Contribution by Evelyn Austin, EDRi member Bits of Freedom, the Netherlands)
