
UK: Online Harms Strategy must “design in” fundamental rights

By Open Rights Group (guest author) · April 10, 2019

After months of waiting and speculation, the United Kingdom's Department for Digital, Culture, Media and Sport (DCMS) has finally published its White Paper on Online Harms, now appearing as a joint publication with the Home Office. The expected duty of care proposal is present, but substantive detail on what it actually means remains sparse: it would perhaps be more accurate to describe this paper as pasty green rather than white.

Over the past year, DCMS has become increasingly fixated on the idea of imposing a duty of care on social media platforms, seeing this as a flexible and de-politicised way to emphasise the dangers of exposing children and young people to certain online content, and to make Facebook in particular liable for the uglier and darker side of its user-generated material.

DCMS talks a lot about the “harm” that social media causes, but its proposals fail to explain how harmful impacts on free expression would be avoided.

On the positive side, the paper lists free expression online as a core value to be protected and addressed by the regulator. However, despite the apparent prominence of this value, the mechanisms to deliver this protection and the issues at play are not explored in any detail at all.

In many cases, online platforms already act as though they have a duty of care towards their users. Though the efficacy of such measures in practice is open to debate, terms and conditions, active moderation of posts and algorithmic choices about which content is pushed or downgraded are all geared towards rooting out illegal activity and creating open and welcoming shared spaces. The White Paper, however, does not elaborate on what the proposed duty would entail. If it is drawn narrowly, so that it only bites when there is clear evidence of real, tangible harm and a reason to intervene, nothing much will change. If it is drawn widely, sweeping up too much content, it will start to act as a justification for widespread internet censorship.

If platforms are required to prevent potentially harmful content from being posted, this incentivises widespread prior restraint. Platforms cannot always know in advance the real-world harm that online content might cause, nor can they accurately predict what people will say or do on their platform. The only way to avoid liability is to impose wide-sweeping upload filters. At scale, this relies on automated decision-making and algorithms, which risks even greater speech restrictions, given that machines are incapable of making nuanced distinctions or recognising parody or sarcasm.

DCMS’s policy is underpinned by socially positive intentions, but in its drive to make the internet “safe”, the government seems not to recognise that ultimately its proposals do not regulate social media companies: they regulate social media users. The duty of care is ostensibly aimed at shielding children from danger and harm, but in practice it will bite on adults too, wrapping society in cotton wool and curtailing a whole host of legal expression.

Although the scheme will have a statutory footing, its detail will depend on codes of practice drafted by the regulator. This makes it difficult to assess how the duty of care framework will ultimately play out.

The duty of care seems to be broadly about whether systemic interventions reduce overall “risk”. But must the risk always be to an identifiable individual, or can it be broader – to identifiable vulnerable groups? To society as a whole? What evidence of harm will be required before platforms should intervene? These questions all remain unanswered.

DCMS’s approach appears to be that it will be up to the regulator to answer these questions. A sensible regulator might take a minimalist view of the extent to which platforms’ commercial decisions should be interfered with, but allowing the government to distance itself from responsibility for the fine detail of the proposed scheme is a dangerous principle. It takes conversations about how to police the internet out of public view and democratic forums. It enables the government to opt not to create a transparent, judicially reviewable legislative framework. And it permits DCMS to light the touch-paper on a deeply problematic policy idea without having to wrestle with the practical reality of how that scheme will affect UK citizens’ free speech, both in the immediate future and for years to come.

How the government decides to legislate and regulate in this instance will set a global norm.

The UK government is clearly keen to lead international efforts to regulate online content. It knows that if the outcome of the duty of care is to change the way social media platforms work, that change will apply worldwide. But to be a global leader, DCMS needs to stop basing policy on isolated issues and anecdotes, and engage with a broader conversation about how we as a society want the internet to look. Otherwise, governments both repressive and democratic are likely to use the policy and regulatory model that emerges from this process as a blueprint for more widespread internet censorship.

The UK House of Lords report on the future of the internet, published in early March 2019, set out ten principles it considered should underpin digital policy-making, including the importance of protecting free expression. The consultation that this White Paper introduces offers a positive opportunity to collectively reflect, across industry, civil society, academia and government, on how the negative aspects of social media can be addressed and risks mitigated. If the government were to use this process to emphasise its support for the fundamental right to freedom of expression – and in a way that goes beyond mere expression of principle – this would also reverberate around the world, particularly at a time when press and journalistic freedom is under attack.

The White Paper expresses a clear desire for tech companies to “design in safety”. As the process of consultation now begins, EDRi member Open Rights Group (ORG) calls on DCMS to “design in fundamental rights”. Freedom of expression is itself a framework, and must not be lightly glossed over. ORG welcomes the opportunity to engage with DCMS further on this topic: before policy ideas become entrenched, the government should consider deeply whether these will truly achieve outcomes that are good for everyone.

Open Rights Group
https://www.openrightsgroup.org

The DCMS Online Harms Strategy must “design in” fundamental rights (08.04.2019)
https://www.openrightsgroup.org/blog/2019/the-dcms-online-harms-strategy-must-design-in-fundamental-rights

Online Harms White Paper – Executive summary (08.04.2019)
https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper-executive-summary--2

Online Harms – White Paper
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf

Open consultation: Online Harms White Paper
https://www.gov.uk/government/consultations/online-harms-white-paper

House of Lords: Regulating in a digital world (09.03.2019)
https://publications.parliament.uk/pa/ld201719/ldselect/ldcomuni/299/299.pdf

(Contribution by Jim Killock and Amy Shepherd, EDRi member Open Rights Group, the United Kingdom)